Search Results: "olet"

31 December 2021

Chris Lamb: Favourite books of 2021: Fiction

In my two most recent posts, I listed the memoirs and biographies and followed this up with the non-fiction I enjoyed the most in 2021. I'll leave my roundup of 'classic' fiction until tomorrow, but today I'll be going over my favourite fiction. Books that just miss the cut here include Kingsley Amis' comic Lucky Jim, Cormac McCarthy's The Road (although see below for McCarthy's Blood Meridian) and the Complete Adventures of Tintin by Hergé, the latter forming an inadvertently incisive portrait of the first half of the 20th century. As ever, there were a handful of books that didn't live up to prior expectations. Despite all of the hype, Emily St. John Mandel's post-pandemic dystopia Station Eleven didn't match her superb The Glass Hotel (one of my favourite books of 2020). The same could be said of John le Carré's The Spy Who Came in from the Cold, which felt significantly shallower compared to Tinker, Tailor, Soldier, Spy (again, a favourite of last year). The strangest book (and most difficult to classify at all) was undoubtedly Patrick Süskind's Perfume: The Story of a Murderer, and the fiction book I disliked the most was almost certainly Beartown by Fredrik Backman. Two other mild disappointments were actually film adaptations. Specifically, the original source for Vertigo by Pierre Boileau and Thomas Narcejac didn't match Alfred Hitchcock's 1958 masterpiece, nor did James Sallis' Drive, which was made into a superb 2011 neon-noir directed by Nicolas Winding Refn. These two films thus defy the usual trend and are 'better than the book', but that's a post for another day.

A Wizard of Earthsea (1971) Ursula K. Le Guin How did it come to be that Harry Potter is the publishing sensation of the century, yet Ursula K. Le Guin's Earthsea is only a popular cult novel? Indeed, the comparisons and unintentional intertextuality with Harry Potter are entirely unavoidable when reading this book, and, in almost every respect, Ursula K. Le Guin's universe comes out the victor. In particular, the wizarding world that Le Guin portrays feels a lot more generous and humble than the class-ridden world of Hogwarts School of Witchcraft and Wizardry. Just to take one example from many, in Earthsea, magic turns out to be nurtured in a bottom-up manner within small village communities, in almost complete contrast to J. K. Rowling's concept of benevolent government departments and NGO-like institutions, which now seems far too New Labour for me. Indeed, imagine an entire world imbued with the kindly benevolence of Dumbledore, and you've got some of the moral palette of Earthsea. The gently moralising tone that runs through A Wizard of Earthsea may put some people off:
Vetch had been three years at the School and soon would be made Sorcerer; he thought no more of performing the lesser arts of magic than a bird thinks of flying. Yet a greater, unlearned skill he possessed, which was the art of kindness.
Still, these parables aimed directly at the reader are fairly rare, and, for me, remain on the right side of being mawkish or hectoring. I'm thus looking forward to reading the next two books in the series soon.

Blood Meridian (1985) Cormac McCarthy Blood Meridian follows a band of American bounty hunters who are roaming the Mexican-American borderlands in the late 1840s. Far from being remotely swashbuckling, though, the group are collecting scalps for money and killing anyone who crosses their path. It is the most unsparing treatment of American genocide and moral depravity I have ever come across, an anti-Western that flouts every convention of the genre. Blood Meridian thus has a family resemblance to that other great anti-Western, Once Upon a Time in the West: after making a number of gun-toting films that venerate the American West (i.e. his Dollars Trilogy), Sergio Leone turned his cynical eye to the western. Yet my previous paragraph actually euphemises just how violent Blood Meridian is. Indeed, I would need to be a much better writer (indeed, perhaps McCarthy himself) to adequately outline the tone of this book. In a certain sense, it's less that you read this book in a conventional sense than that you are forced to witness successive chapters of grotesque violence... all occurring for no obvious reason. It is often said that books 'subvert' a genre and, indeed, I implied as much above. But the term subvert implies a kind of Puck-like mischievousness, or brings to mind court jesters licensed to poke fun at the courtiers. By contrast, however, Blood Meridian isn't funny in the slightest. There isn't animal cruelty per se, but rather wanton negligence of another kind entirely. In fact, recalling a particular passage involving an injured horse makes me feel physically ill. McCarthy's prose is at once both baroque in its language and thrifty in its presentation. As Philip Connors wrote back in 2007, McCarthy has spent forty years writing as if he were trying to expand the Old Testament, and learning that McCarthy grew up around the Church therefore came as no real surprise. As an example of his textual frugality, I often looked for greater precision in the text, finding myself asking who a particular 'he' is, or to which side of a fight two men belonged. Yet we must always remember that there is no precision to be found in a gunfight, so this infidelity is turned into a virtue. It's not that these are fair fights anyway, or even 'murder': Blood Meridian is just slaughter; pure butchery. Murder is a gross understatement for what this book is, and at many points we are grateful that McCarthy spares us precision. At others, however, we can be thankful for his exactitude. There is no ambiguity regarding the morality of the puppy-drowning Judge, for example: a Colonel Kurtz who has been given free license over the entire American south. There is, thank God, no danger of Hollywood mythologising him into a badass hero. Indeed, we must all be thankful that it is impossible to film this ultra-violent book... Indeed, the broader idea of 'adapting' anything to this world is beyond sick. An absolutely brutal read; I cannot recommend it highly enough.

Bodies of Light (2014) Sarah Moss Bodies of Light is a 2014 book by Glasgow-born Sarah Moss on the stirrings of women's suffrage within an arty clique in nineteenth-century England. Set in the intellectually smoggy cities of Manchester and London, this poignant book follows the studiously intelligent Alethia 'Ally' Moberly who is struggling to gain the acceptance of herself, her mother and the General Medical Council. You can read my full review from July.

House of Leaves (2000) Mark Z. Danielewski House of Leaves is a remarkably difficult book to explain. Although the plot refers to a fictional documentary about a family whose house is somehow larger on the inside than the outside, this quotidian horror premise doesn't explain the complex meta-commentary that Danielewski adds on top. For instance, the book contains a large number of pseudo-academic footnotes (many of which contain footnotes themselves), with references to scholarly papers, books, films and other articles. Most of these references are obviously fictional, but it's the kind of book where the joke is that some of them are not. The format, structure and typography of the book are highly unconventional too, with extremely unusual page layouts and styles. It's the sort of book and idea that should be a tired gimmick but somehow isn't. This is particularly so when you realise it seems specifically designed to create a fandom around it and to manufacture its own 'cult' status, something that should be extremely tedious. But not only does this not happen, House of Leaves seems to have survived through two exhausting decades of found footage: The Blair Witch Project and Paranormal Activity are, to an admittedly lesser degree, doing much of the same thing as House of Leaves. House of Leaves might have its origins in Nabokov's Pale Fire or even Derrida's Glas, but it seems to have more in common with the claustrophobic horror of Cube (1997). And like all of these works, House of Leaves has an extremely strange effect on the reader or viewer, something quite unlike reading a conventional book. It wasn't so much what I got out of the book itself, but how it added a glow to everything else I read, watched or saw at the time. An experience.

Milkman (2018) Anna Burns This quietly dazzling novel from Irish author Anna Burns is full of intellectual whimsy and oddball incident. Incongruously set in 1970s Belfast during The Irish Troubles, Milkman's 18-year-old narrator (known only as 'middle sister'), is the kind of dreamer who walks down the street with a Victorian-era novel in her hand. It's usually an error for a book to specifically mention other books, if only because inviting comparisons to great novels is grossly ill-advised. But it is a credit to Burns' writing that the references here actually add to the text and don't feel like they are a kind of literary paint by numbers. Our humble narrator has a boyfriend of sorts, but the figure who looms the largest in her life is a creepy milkman: an older, married man who's deeply integrated into the paramilitary tribalism. And when gossip about the narrator and the milkman surfaces, the milkman begins to invade her life to a suffocating degree. Yet this milkman is not even a milkman at all. Indeed, it's precisely this kind of oblique irony that runs through this daring but darkly compelling book.

The First Fifteen Lives of Harry August (2014) Claire North Harry August is born, lives a relatively unremarkable life and finally dies a relatively unremarkable death. Not worth writing a novel about, I suppose. But then Harry finds himself born again in the very same circumstances, and as he grows from infancy into childhood again, he starts to remember his previous lives. This loop naturally drives Harry insane at first, but after finding that suicide doesn't stop the quasi-reincarnation, he becomes somewhat acclimatised to his fate. He prospers much better at school the next time around and is ultimately able to make better decisions about his life, especially when he just happens to know how to stay out of trouble during the Second World War. Yet what caught my attention in this 'soft' sci-fi book was not necessarily the book's core idea but rather the way its connotations were so intelligently thought through. Just like in a musical theme and variations, the success of any concept-driven book is far more a product of how the implications of the key idea are played out than how clever the central idea was to begin with. Otherwise, you just have another neat Borges short story: satisfying, to be sure, but in a narrower way. From her relatively simple premise, for example, North has divined that if there was a community of people who could remember their past lives, this would actually allow messages and knowledge to be passed backwards and forwards in time. Ah, of course! Indeed, this very mechanism drives the plot: news comes back from the future that the progress of history is being interfered with, and, because of this, the end of the world is slowly coming. Through the lives that follow, Harry sets out to find out who is passing on technology before its time, and work out how to stop them. With its gently-moralising romp through the salient historical touchpoints of the twentieth century, I sometimes got a whiff of Forrest Gump. But it must be stressed that this book is far less certain of its 'right-on' liberal credentials than Robert Zemeckis' badly-aged film. And whilst we're on the topic of other media, if you liked the underlying conceit behind Stuart Turton's The Seven Deaths of Evelyn Hardcastle yet didn't enjoy the 'variations' of that particular tale, then I'd definitely give The First Fifteen Lives a try. At the very least, 15 is bigger than 7. More seriously, though, The First Fifteen Lives appears to reflect anxieties about technology, particularly around modern technological accelerationism. At no point does it seriously suggest that if we could somehow possess the technology from a decade in the future then our lives would be improved in any meaningful way. Indeed, precisely the opposite is invariably implied. To me, at least, homo sapiens often seems to be merely marking time until we can blow each other up, and destroying the climate whilst sleepwalking into some crisis that might precipitate a thermonuclear genocide sometimes seems to be built into our DNA. In an era of cli-fi fiction and our non-fiction newspaper headlines, to label North's insight as 'prescience' might perhaps be overstating it, but perhaps that is the point: this destructive and negative streak is universal to all periods of our violent, insecure species.

The Goldfinch (2013) Donna Tartt After Breaking Bad, the second biggest runaway success of 2014 was probably Donna Tartt's doorstop of a novel, The Goldfinch. Yet upon its release and popular reception, it got a significant number of bad reviews in the literary press with, of course, an equal number of predictable think pieces claiming this was sour grapes on the part of the cognoscenti. Ah, to be in 2014 again, when our arguments were so much more trivial. For the uninitiated, The Goldfinch is a sprawling bildungsroman that centres on Theo Decker, a 13-year-old whose world is turned upside down when a terrorist bomb goes off whilst he is visiting the Metropolitan Museum of Art, killing his mother among other bystanders. Perhaps more importantly, he makes off with a painting in order to fulfil a promise to a dying old man: Carel Fabritius' 1654 masterpiece The Goldfinch. For the next 14 years (and almost 800 pages), the painting becomes the only connection to his lost mother as he's flung, almost entirely rudderless, around the Western world, encountering an array of eccentric characters. Whatever the critics claimed, Tartt's near-perfect evocation of scenes, from the everyday to the unimaginable, is difficult to summarise. I wouldn't label it 'cinematic' due to her evocation of the interiority of the characters. Take, for example: 'Even the suggestion that my father had close friends conveyed a misunderstanding of his personality that I didn't know how to respond to.' It's precisely this kind of relatable inner subjectivity that cannot be easily conveyed by film, and it is likely one of the main reasons why the 2019 film adaptation was such a damp squib. Tartt's writing is definitely not 'impressionistic' either: there are many near-perfect evocations of scenes, even ones we hope we cannot recognise from real life. In particular, some of the drug-taking scenes feel so credibly authentic that I sometimes worried about the author herself. Almost eight months on from first reading this novel, what I remember most was what a joy this was to read. I do worry that it won't stand up to a more critical re-reading (the character named Xandra even sounds like the pharmaceuticals she is taking), but I think I'll always treasure the first days I spent with this often-beautiful novel.

Beyond Black (2005) Hilary Mantel Published about five years before the hyperfamous Wolf Hall (2009), Hilary Mantel's Beyond Black is a deeply disturbing book about spiritualism and the nature of Hell, somewhat incongruously set in modern-day England. Alison Harte is a middle-aged psychic medium who works in the various towns of the London orbital motorway. She is accompanied by her stuffy assistant, Colette, and her spirit guide, Morris, who is invisible to everyone but Alison. However, this is no gentle and musk-smelling world of the clairvoyant and mystic, for Alison is plagued by spirits from her past who infiltrate her physical world, becoming stronger and nastier every day. Alison's smiling and rotund persona thus conceals a truly desperate woman: she knows beyond doubt the terrors of the next life, yet must studiously conceal them from her credulous clients. Beyond Black would be worth reading for its dark atmosphere alone, but it offers much more than a chilling and creepy tale. Indeed, it is extraordinarily observant as well as unsettlingly funny about a particular tranche of British middle-class life. Still, it is the book's unnerving nature that sticks in the mind, and reading it noticeably changed my mood for days afterwards, and not necessarily for the best.

The Wall (2019) John Lanchester The Wall tells the story of a young man called Kavanagh, one of the thousands of Defenders standing guard around a solid fortress that envelops the British Isles. A national service of sorts, it is Kavanagh's job to stop the so-called Others getting in. Lanchester is frank about what his wall provides to those who stand guard: the Defenders of the Wall are conscripted for two years on the Wall, with no exceptions, giving everyone in society a life plan and a story. But whilst The Wall is ostensibly about a physical wall, it works even better as a story about the walls in our mind. In fact, the book blends together some of the most important issues of our time: climate change, increasing isolation, Brexit and other widening societal divisions. If you liked P. D. James' The Children of Men you'll undoubtedly recognise much of the same intellectual atmosphere, although the sterility of John Lanchester's dystopia is definitely figurative and textual rather than literal. Despite the final chapters perhaps not living up to the world-building of the opening, The Wall features a taut and engrossing narrative, and it undoubtedly warrants even the most cursory glance at its symbolism. I've yet to read something by Lanchester I haven't enjoyed (even his short essay on cheating in sports, for example) and will be definitely reading more from him in 2022.

The Only Story (2018) Julian Barnes The Only Story is the story of Paul, a 19-year-old boy who falls in love with 42-year-old Susan, a married woman with two daughters who are about Paul's age. The book begins with how Paul meets Susan in happy (albeit complicated) circumstances, but as the story unfolds, the novel becomes significantly more tragic and moving. Whilst the story begins from the first-person perspective, midway through the book it shifts into the second person, and, later, into the third as well. Both of these narrative changes suggested to me an attempt on the part of Paul the narrator (if not Barnes himself), to distance himself emotionally from the events taking place. This effect is a lot more subtle than it sounds, however: far more prominent and devastating is the underlying and deeply moving story about how the relationship ends up. Throughout this touching book, Barnes uses his mastery of language and observation to avoid the saccharine and the maudlin, and ends up with a heart-wrenching and emotive narrative. Without a doubt, this is the saddest book I read this year.

19 December 2021

Bastian Venthur: Managing dotfiles with GNU Stow

Many developers manage their user-specific application configuration, also known as dotfiles, in a version control system such as git. This allows for keeping track of changes and synchronizing the dotfiles across different machines. Searching on github, you'll find thousands of dotfile repositories. As your dotfiles are sprinkled all over your home directory, managing them in a single repository is not trivial, i.e. how do you make sure that your .bashrc, .tmux.conf, etc. that live in your dotfile repository appear in the proper places in your home directory? The most common solution is to use symlinks so that the .tmux.conf in your home directory is just a symlink pointing to the appropriate file in your dotfile repository:
$ ls -l ~/.tmux.conf
lrwxrwxrwx 1 venthur venthur 34 18. Dez 22:53 /home/venthur/.tmux.conf -> git/dotfiles/tmux/.tmux.conf
This leads immediately to another problem: how do you manage the symlinks? For the longest time I just manually maintained the symlinks on the various machines, but this approach does not scale well with the number of dotfiles and machines you're using this repository on. Often, people write their own shell scripts that help them with the maintenance of the symlinks, but at least the solutions I've seen so far did not convince me. Last year I stumbled upon GNU Stow, an unpretentious little tool that does not reveal at first sight how useful it would be for the job. The description on the website says:
GNU Stow is a symlink farm manager which takes distinct packages of software and/or data located in separate directories on the filesystem, and makes them appear to be installed in the same place.
Right. How does it work? In stow's terminology, a package is a set of files and directories that need to be installed in a particular directory structure. The target directory is the root of the tree in which the package appears to be installed. When you stow a package, stow creates symlinks in the target directory that point into the package. Let's say I have my dotfiles repository in ~/git/dotfiles/. Within this repository, I have a tmux package, containing the .tmux.conf dotfile:
$ pwd
/home/venthur/git/dotfiles
$ find tmux
tmux                # the package
tmux/.tmux.conf     # the dotfile
The target directory is my home directory, as this is where the symlinks need to be created. I can now stow the tmux package into the target directory like so:
$ stow --target=/home/venthur tmux
and stow will create the appropriate symlinks to the contents of the package into the target directory:
$ ls -l ~/.tmux.conf
lrwxrwxrwx 1 venthur venthur 34  2. Jun 2021  /home/venthur/.tmux.conf -> git/dotfiles/tmux/.tmux.conf
Note that the name of the package (i.e. the name of the directory) does not matter as stow points the symlinks into the package, so you can choose it freely. I usually use the name of the program that the configuration belongs to as the package name. Your package can also contain several files or even a complex directory structure. Let's look at the configuration for neovim, which lives below ~/.config/nvim/:
$ pwd
/home/venthur/git/dotfiles
$ find neovim
neovim
neovim/.config
neovim/.config/nvim
neovim/.config/nvim/init.vim
$ stow --target=/home/venthur neovim
$ ls -l ~/.config/nvim
lrwxrwxrwx 1 venthur venthur 41  2. Jun 2021  /home/venthur/.config/nvim -> ../git/dotfiles/neovim/.config/nvim
At this point we should mention that the target directory for my dotfiles will always be my home directory, so the contents of the packages are either the configuration files or the directory structure as they live in my home directory. Deleting a package from the target directory You can also remove (unstow) a package from the target directory again, using the --delete parameter:
$ ls -l ~/.tmux.conf
lrwxrwxrwx 1 venthur venthur 34 18. Dez 22:53 /home/venthur/.tmux.conf -> git/dotfiles/tmux/.tmux.conf
$ stow --target=/home/venthur --delete tmux/
$ ls -l ~/.tmux.conf
ls: cannot access '/home/venthur/.tmux.conf': No such file or directory
Stowing several packages at once Since your dotfile repository will likely contain more than one package, it makes sense to combine the individual stow commands into one, so instead of stowing everything individually,
$ stow --target=/home/venthur tmux
$ stow --target=/home/venthur vim
$ stow --target=/home/venthur neovim
you can stow everything at once:
$ stow --target=/home/venthur */
Note that I use */ instead of * to match all directories (i.e. packages), since my dotfiles repository also contains a README.md and a makefile. Putting it all together My dotfiles repository contains a makefile that allows me to create/update or delete all symlinks at once:
all:
        stow --verbose --target=$$HOME --restow */
delete:
        stow --verbose --target=$$HOME --delete */
The --restow parameter tells stow to unstow the packages first before stowing them again, which is useful for pruning obsolete symlinks from the target directory. Et voilà! Whenever I make a change in my dotfiles repository that involves creating or deleting a dotfile (or a package), I simply call:
$ make
and everything is updated. To delete all dotfile-related symlinks from this machine, I simply run:
$ make delete
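As a quick usage sketch that is not part of the original post: on a freshly installed machine, the same repository can be bootstrapped with a clone, a dry run and then the real thing. The repository URL below is a placeholder, and --simulate (alias -n) is GNU Stow's dry-run flag, which only reports what would be done without touching the filesystem:
$ git clone https://example.com/venthur/dotfiles.git ~/git/dotfiles
$ cd ~/git/dotfiles
$ stow --simulate --verbose --target=$HOME */
$ make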

18 September 2021

Mike Gabriel: X2Go, Remmina and X2GoKdrive

In this blog post, I will cover a few related but also different topics around X2Go - the GNU/Linux based remote computing framework. Introduction and Catch Up For those who haven't come across X2Go so far... With X2Go [0] you can log into remote GNU/Linux machines graphically and launch headless desktop environments, seamless/published applications or access an already running desktop session (on a local Xserver or running as a headless X2Go desktop session) via X2Go's session shadowing / mirroring feature. Graphical backend: NXv3 For several years, there was only one graphical backend available in X2Go, the NXv3 software. In NXv3, you have a headless or nested (it can do both) Xserver that has some remote magic built-in and is able to transfer the Xserver's graphical data to a remote client (NX proxy). Over the wire, the NX protocol allows for data compression (JPEG, PNG, etc.) and combines it with bitmap caching, so that the overall result is a fast and responsive desktop experience even on high-latency and low-bandwidth connections. This especially applies to X desktop environments that use many native X protocol operations for drawing windows and widgets onto the screen. The more bitmaps involved (e.g. in applications with client-side rendering of window controls and such), the worse the quality of a session experience. The current main maintainer of NXv3 (aka nx-libs [1]) is Ulrich Sibiller. Uli has my and the X2Go community's full appreciation, admiration and gratitude for all the work he does on nx-libs, constantly improving NXv3 without breaking compatibility with legacy use cases (yes, FreeNX is still alive, by the way). NEW: Alternative Graphical Backend: X2Go Kdrive Over the past 1.5 years, Oleksandr Shneyder (Alex), co-founder of X2Go, has been working on a re-implementation of an alternative, less X11-dependent graphical backend. The underlying Xserver technology is the kdrive part of the X.org server project. People on GNU/Linux might have used kdrive technology already: The Xephyr nested Xserver uses the kdrive implementation. The idea of the X2Go Kdrive [2] implementation in X2Go is to provide a headless Xserver on the X2Go Server side for running X11 based desktop sessions inside while using an X11-agnostic data protocol for sending the graphical desktop data to the client-side for rendering. Whereas with NXv3 technology you need a local Xserver on the client side, with X2Go Kdrive you only need a client app(lication) that can draw bitmaps into some sort of framebuffer, such as a client-side X11 Xserver, a client-side Wayland compositor or (hold your breath) an HTMLv5 canvas in a web browser. X2Go Kdrive Client Implementations During the first half of this year, I tested and DEB-packaged Alex's X2Go HTMLv5 client code [3] and it has been available for testing in the X2Go nightly builds archive for a while now. Of course, the native X2Go Client application has had X2Go Kdrive support for a while, too, but it requires a Qt5 application in the background, the x2gokdriveclient (which is still only available in X2Go nightly builds or from X2Go Git [4]). X2Go and Remmina As currently posted by the Remmina community [5], one of my employees has been working on finalizing an already existing draft of mine for the last couple of months: Remmina Plugin X2Go. This project has been contracted by BAUR-ITCS UG (haftungsbeschränkt) already a while back and has been financed via X2Go funding from one of their customers. Unfortunately, I never really got around to finalizing the project. 
Apologies for this. Daniel Teichmann, who has been in the company for a while now, but just recently switched to an employment model with considerably more work hours per week, picked up this project two months ago and has achieved awesome things along the way. Daniel Teichmann and Antenore Gatta (Remmina core developer, aka tmow) have been cooperating intensely on this, recently, with the objective of getting the X2Go plugin code merged into Remmina asap. We are pretty close to the first touchdown (i.e. code merge) of this endeavour. Thanks to Antenore for his support on this. This is much appreciated. Remmina Plugin X2Go - Current Challenges The X2Go Plugin for Remmina implementation uses Python X2Go (PyHoca-CLI) under the bonnet and basically does a system call to pyhoca-cli according to the session settings configured in the Remmina session profile UI. When using NXv3 based sessions, the session window appears on the client-side Xserver and immediately gets caught by Remmina and embedded into the Remmina frame (via Xembed protocol) where its remote sessions are supposed to appear. (Thankfully, GtkSocket is still around in GTK-3.) The knowing GTK-3 experts among you may have noticed: GtkSocket is obsolete and has been removed from GTK-4. Also, GtkSocket support is only available in GTK-3 when using its X11 rendering backend. For the X2Go Kdrive implementation, we tested a similar approach (embedding the x2gokdriveclient Qt5 window via Xembed/GtkSocket), but it seems that GtkSocket and Qt5 applications don't work well together and we did not succeed in embedding the Qt5 window of the x2gokdriveclient application into Remmina, so far. Also, this would be a work-around for the bigger problem: We want, long-term, to provide X2Go Kdrive support in Remmina, not only for Remmina running with GTK-3/X11, but also when Remmina is used natively on top of Wayland. So, the more sustainable approach for showing an X2Go Kdrive based X2Go session in Remmina would be a GTK-3/4 or a Glib-2.0 + Cairo based rendering client provided as a shared library. This then could be used by Remmina for drawing the session bitmaps into the Remmina session frame. This would require a port of the x2gokdriveclient Qt code into a non-Qt implementation. However, we are running out of funding to make this happen at the moment. More Funding Needed for this Journey As you might guess, such a project as proposed is one that some people do in their spare time, while others do it for a living. I'd love to continue this project and have Daniel Teichmann continue his work on this, so that Remmina might soon be able to provide native X2Go Kdrive Client support. If people read this and are interested in supporting such a project, please get in touch [6]. Thanks so much! light+love
Mike (aka sunweaver) [0] https://wiki.x2go.org/
[1] https://github.com/ArcticaProject/nx-libs
[2] https://code.x2go.org/gitweb?p=x2gokdrive.git;a=tree
[3] https://code.x2go.org/gitweb?p=x2gohtmlclient.git;a=tree
[4] https://code.x2go.org/gitweb?p=x2gokdriveclient.git;a=tree
[5] https://remmina.org/x2go/
[6] https://das-netzwerkteam.de/

15 September 2021

Ian Jackson: Get source to Debian packages only via dgit; "official" git links are beartraps

tl;dr dgit clone sourcepackage gets you the source code, as a git tree, in ./sourcepackage. cd into it and dpkg-buildpackage -uc -b. Do not use: "VCS" links on official Debian web pages like tracker.debian.org; "debcheckout"; searching Debian's gitlab (salsa.debian.org). These are good for Debian experts only. If you use Debian's "official" source git repo links you can easily build a package without Debian's patches applied.[1] This can even mean missing security patches. Or maybe it can't even be built in a normal way (or at all). OMG WTF BBQ, why? It's complicated. There is History. Debian's "most-official" centralised source repository is still the Debian Archive, which is a system based on tarballs and patches. I invented the Debian source package format in 1992/3 and it has been souped up since, but it's still tarballs and patches. This system is, of course, obsolete, now that we have modern version control systems, especially git. Maintainers of Debian packages have invented ways of using git anyway, of course. But this is not standardised. There is a bewildering array of approaches. The most common approach is to maintain a git tree containing a pile of *.patch files, which are then often maintained using quilt. Yes, really, many Debian people are still using quilt, despite having git! There is machinery for converting this git tree containing a series of patches, to an "official" source package. If you don't use that machinery, and just build from git, nothing applies the patches. [1] This post was prompted by a conversation with a friend who had wanted to build a Debian package, and didn't know to use dgit. They had got the source from salsa via a link on tracker.d.o, and built .debs without Debian's patches. This is not a theoretical unsoundness, but a very real practical risk. Future is not very bright In 2013 at the Debconf in Vaumarcus, Joey Hess, myself, and others, came up with a plan to try to improve this which we thought would be deployable. (Previous attempts had failed.) Crucially, this transition plan does not force change onto any of Debian's many packaging teams, nor onto people doing cross-package maintenance work. I worked on this for quite a while, and at a technical level it is a resounding success. Unfortunately there is a big limitation. At the current stage of the transition, to work at its best, this replacement scheme hopes that maintainers who update a package will use a new upload tool. The new tool fits into their existing Debian git packaging workflow and has some benefits, but it does make things more complicated rather than less (like any transition plan must, during the transitional phase). When maintainers don't use this new tool, the standardised git branch seen by users is a compatibility stub generated from the tarballs-and-patches. So it has the right contents, but useless history. The next step is to allow a maintainer to update a package without dealing with tarballs-and-patches at all. This would be massively more convenient for the maintainer, so an easy sell. And of course it links the tarballs-and-patches to the git history in a proper machine-readable way. We held a "git packaging requirements-gathering session" at the Curitiba Debconf in 2019. I think the DPL's intent was to try to get input into the git workflow design problem. The session was a great success: my existing design was able to meet nearly everyone's needs and wants. The room was obviously keen to see progress. The next stage was to deploy tag2upload. 
I spoke to various key people at the Debconf and afterwards in 2019 and the code has been basically ready since then. Unfortunately, deployment of tag2upload is mired in politics. It was blocked by a key team because of unfounded security concerns; positive opinions from independent security experts within Debian were disregarded. Of course it is always hard to get a team to agree to something when it's part of a transition plan which treats their systems as an obsolete setup retained for compatibility. Current status If you don't know about Debian's git packaging practices (eg, you have no idea what "patches-unapplied packaging branch without .pc directory" means), and don't want to learn about them, you must use dgit to obtain the source of Debian packages. There is a lot more information and detailed instructions in dgit-user(7). Hopefully either the maintainer did the best thing, or, if they didn't, you won't need to inspect the history. If you are a Debian maintainer, you should use dgit push-source to do your uploads. This will make sure that users of dgit will see a reasonable git history.
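To make the tl;dr above concrete, here is a minimal sketch of the user workflow; the package name is only an example:
$ dgit clone glibc              # source as a git tree, with Debian's patches applied, in ./glibc
$ cd glibc
$ dpkg-buildpackage -uc -b      # build binary packages, unsigned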
edited 2021-09-15 14:48 Z to fix a typo


12 September 2021

Russ Allbery: DocKnot 5.00

This release is the culmination of a project that I've been wanting to do for two years, but nearly all the work was done in the past week. That experience made me rethink some of my skepticism, but I'll get to that part of the story later. In March of 1999, I got tired of writing HTML by hand and wrote a small program called spin that implemented a macro language that translated into HTML. This makes it one of the oldest programs for which I have a continuous development history, predating podlators by three months. I think only News::Gateway (now very dormant) and Term::ANSIColor (still under active development but very stable) are older, as long as I'm not counting orphaned packages like newsyslog. I've used spin continuously ever since. It's grown features and an ecosystem of somewhat hackish scripts to do web publishing things I've wanted over the years: journal entries like this one, book reviews, a simple gallery (with some now-unfortunate decisions about maximum image size), RSS feeds, and translation of lots of different input files into HTML. But the core program itself, in all those years, has been one single Perl script written mostly in my Perl coding style from the early 2000s before I read Perl Best Practices. My web site is long overdue for an overhaul. Just to name a couple of obvious problems, it looks like trash on mobile browsers, and I'm using URL syntax from the early days of the web that, while it prompts some nostalgia for tildes, means all the URLs are annoyingly long and embed useless information such as the fact each page is written in HTML. Its internals also use a lot of ad hoc microformats (a bit of RFC 2822 here, a text-based format with significant indentation there, a weird space-separated database) and are supported by programs that extract meaning from human-written pages and perform automated updates to them rather than having a clear separation between structure and data. This will be a very large project, but it's the sort of quixotic personal project that I enjoy. Maintaining my own idiosyncratic static site generator is almost certainly not an efficient use of my time compared to, say, converting everything to Hugo. But I have 3,428 pages (currently) written in the thread macro language, plus numerous customizations that cater to my personal taste and interests, and, most importantly, I like having a highly customized system that I know exactly how to automate. The blocker has been that I didn't want to work on spin as it existed. It badly needed a structural overhaul and modernization, and even more badly needed a test suite since every release involved tedious manual testing by poring over diffs between generations of the web site. And that was enough work to be intimidating, so I kept putting it off. I've separately been vaguely aware that I have been spending too much time reading Twitter (specifically) and the news (in general). It would be one thing if I were taking in that information to do something productive about it, but I haven't been. It's just doomscrolling. I've been thinking about taking a break for a while but it kept not sticking, so I decided to make a concerted effort this week. It took about four days to stop wanting to check Twitter and forcing myself to go do something else productive or at least play a game instead. Then I managed to get started on my giant refactoring project, and holy shit, Twitter has been bad for my attention span! I haven't been able to sustain this level of concentration for hours at a time in years. 
Twitter's not the only thing to blame (there are a few other stressors that I've fixed in the past couple of years), but it's obviously a huge part. Anyway, this long personal ramble is a prelude to the first release of DocKnot that includes my static site generator. This is not yet the full tooling from my old web tools page; specifically, it's missing faq2html, cl2xhtml, and cvs2xhtml. (faq2html will get similar modernization treatment, cvs2xhtml will probably be rewritten in Perl since I have some old, obsolete scripts that may live in CVS forever, and I may retire cl2xhtml since I've stopped using the GNU ChangeLog format entirely.) But DocKnot now contains the core of my site generation system, including the thread macro language, POD conversion (by way of Pod::Thread), and RSS feeds. Will anyone else ever use this? I have no idea; realistically, probably not. If you were starting from scratch, I'm sure you'd be better off with one of the larger and more mature static site generators that's not the idiosyncratic personal project of one individual. It is packaged for Debian because it's part of the tool chain for generating files (specifically README.md) that are included in every package I maintain, and thus is part of the transitive closure of Debian main, but I'm not sure anyone will install it from there for any other purpose. But for once making something for someone else isn't the point. This is my quirky, individual way to maintain web sites that originated in an older era of the web and that I plan to keep up-to-date (I'm long overdue to figure out what they did to HTML after abandoning the XHTML approach) because it brings me joy to do things this way. In addition to adding the static site generator, this release also has the regular sorts of bug fixes and minor improvements: better formatting of software pages for software that's packaged for Debian, not assuming every package has a TODO file, and ignoring Autoconf 2.71 backup files when generating distribution tarballs. You can get the latest version of DocKnot from CPAN as App-DocKnot, or from its distribution page. I know I haven't yet updated my web tools page to reflect this move, or changed the URL in the footer of all of my pages. This transition will be a process over the next few months and will probably prompt several more minor releases.

1 August 2021

Paul Wise: FLOSS Activities July 2021

Focus This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

Administration
  • libusbgx/gt: triage issues
  • Debian packages: triaged bugs for reintroduced packages
  • Debian servers: debug lists mail issue, debug lists subscription issue
  • Debian wiki: unblock IP addresses, approve accounts

Communication
  • Respond to queries from Debian users and contributors on the mailing lists and IRC

Sponsors The microsoft-authentication-library-for-python and purple-discord work was sponsored by my employer. All other work was done on a volunteer basis.

14 June 2021

François Marier: Self-hosting an Ikiwiki blog

8.5 years ago, I moved my blog to Ikiwiki and Branchable. It's now time for me to take the next step and host my blog on my own server. This is how I migrated from Branchable to my own Apache server.

Installing Ikiwiki dependencies Here are all of the extra Debian packages I had to install on my server:
apt install ikiwiki ikiwiki-hosting-common gcc libauthen-passphrase-perl libcgi-formbuilder-perl libcrypt-ssleay-perl libjson-xs-perl librpc-xml-perl python-docutils libxml-feed-perl libsearch-xapian-perl libmailtools-perl highlight-common libsearch-xapian-perl xapian-omega
apt install --no-install-recommends ikiwiki-hosting-web libgravatar-url-perl libmail-sendmail-perl libcgi-session-perl
apt purge libnet-openid-consumer-perl
Then I enabled the CGI module in Apache:
a2enmod cgi
and un-commented the following in /etc/apache2/mods-available/mime.conf:
AddHandler cgi-script .cgi

Creating a separate user account Since Ikiwiki needs to regenerate my blog whenever a new article is pushed to the git repo or a comment is accepted, I created a restricted user account for it:
adduser blog
adduser blog sshuser
chsh -s /usr/bin/git-shell blog

git setup Thanks to Branchable storing blogs in git repositories, I was able to import my blog using a simple git clone in /home/blog (the srcdir):
git clone --bare git://feedingthecloud.branchable.com/ source.git
Note that the name of the directory (source.git) is important for the ikiwikihosting plugin to work. Then I pulled the .setup file out of the setup branch in that repo and put it in /home/blog/.ikiwiki/FeedingTheCloud.setup. After that, I deleted the setup branch and the origin remote from that clone:
git branch -d setup
git remote rm origin
Following the recommended git configuration, I created a working directory (the repository) for the blog user to modify the blog as needed:
cd /home/blog/
git clone /home/blog/source.git FeedingTheCloud
I added my own ssh public key to /home/blog/.ssh/authorized_keys so that I could push to the srcdir from my laptop. Finally, I generated a new ssh key without a passphrase:
ssh-keygen -t ed25519
and added it as deploy key to the GitHub repo which acts as a read-only mirror of my blog.

Ikiwiki config While I started with the Branchable setup file, I changed the following things in it:
adminemail: webmaster@fmarier.org
srcdir: /home/blog/FeedingTheCloud
destdir: /var/www/blog
url: https://feeding.cloud.geek.nz
cgiurl: https://feeding.cloud.geek.nz/blog.cgi
cgi_wrapper: /var/www/blog/blog.cgi
cgi_wrappermode: 675
add_plugins:
- goodstuff
- lockedit
- comments
- blogspam
- sidebar
- attachment
- favicon
- format
- highlight
- search
- theme
- moderatedcomments
- flattr
- calendar
- headinganchors
- notifyemail
- anonok
- autoindex
- date
- relativedate
- htmlbalance
- pagestats
- sortnaturally
- ikiwikihosting
- gitpush
- emailauth
disable_plugins:
- brokenlinks
- fortune
- more
- openid
- orphans
- passwordauth
- progress
- recentchanges
- repolist
- toggle
- txt
sslcookie: 1
cookiejar:
  file: /home/blog/.ikiwiki/cookies
useragent: ikiwiki
git_wrapper: /home/blog/source.git/hooks/post-update
urlalias:
- http://feeds.cloud.geek.nz/
- http://www.feeding.cloud.geek.nz/
owner: francois@fmarier.org
hostname: feeding.cloud.geek.nz
emailauth_sender: login@fmarier.org
allowed_attachments: admin()
Then I created the destdir:
mkdir /var/www/blog
chown blog:blog /var/www/blog
and generated the initial copy of the blog as the blog user:
ikiwiki --setup .ikiwiki/FeedingTheCloud.setup --wrappers --rebuild
One thing that failed to generate properly was the tag cloud (from the pagestats plugin). I have not been able to figure out why it fails to generate any output when run this way, but if I push to the repo and let the git hook handle the rebuilding of the wiki, the tag cloud is generated correctly. Consequently, fixing this is not high on my list of priorities, but if you happen to know what the problem is, please reach out.

Apache config Here's the Apache config I put in /etc/apache2/sites-available/blog.conf:
<VirtualHost *:443>
    ServerName feeding.cloud.geek.nz
    SSLEngine On
    SSLCertificateFile /etc/letsencrypt/live/feeding.cloud.geek.nz/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/feeding.cloud.geek.nz/privkey.pem
    Header set Strict-Transport-Security: "max-age=63072000; includeSubDomains; preload"
    Include /etc/fmarier-org/blog-common
</VirtualHost>
<VirtualHost *:443>
    ServerName www.feeding.cloud.geek.nz
    ServerAlias feeds.cloud.geek.nz
    SSLEngine On
    SSLCertificateFile /etc/letsencrypt/live/feeding.cloud.geek.nz/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/feeding.cloud.geek.nz/privkey.pem
    Redirect permanent / https://feeding.cloud.geek.nz/
</VirtualHost>
<VirtualHost *:80>
    ServerName feeding.cloud.geek.nz
    ServerAlias www.feeding.cloud.geek.nz
    ServerAlias feeds.cloud.geek.nz
    Redirect permanent / https://feeding.cloud.geek.nz/
</VirtualHost>
and the common config I put in /etc/fmarier-org/blog-common:
ServerAdmin webmaster@fmarier.org
DocumentRoot /var/www/blog
LogLevel core:info
CustomLog ${APACHE_LOG_DIR}/blog-access.log combined
ErrorLog ${APACHE_LOG_DIR}/blog-error.log
AddType application/rss+xml .rss
<Location /blog.cgi>
        Options +ExecCGI
</Location>
before enabling all of this using:
a2ensite blog
apache2ctl configtest
systemctl restart apache2.service
The feeds.cloud.geek.nz domain used to be pointing to Feedburner and so I need to maintain it in order to avoid breaking RSS feeds from folks who added my blog to their reader a long time ago.

Server-side improvements Since I'm now in control of the server configuration, I was able to make several improvements to how my blog is served. First of all, I enabled the HTTP/2 and Brotli modules:
a2enmod http2
a2enmod brotli
and enabled Brotli compression by putting the following in /etc/apache2/conf-available/compression.conf:
<IfModule mod_brotli.c>
  <IfDefine !TRANSFER_COMPRESSION>
    Define TRANSFER_COMPRESSION BROTLI_COMPRESS
  </IfDefine>
</IfModule>
<IfModule mod_deflate.c>
  <IfDefine !TRANSFER_COMPRESSION>
    Define TRANSFER_COMPRESSION DEFLATE
  </IfDefine>
</IfModule>
<IfDefine TRANSFER_COMPRESSION>
  <IfModule mod_filter.c>
    AddOutputFilterByType ${TRANSFER_COMPRESSION} text/html text/plain text/xml text/css text/javascript
    AddOutputFilterByType ${TRANSFER_COMPRESSION} application/x-javascript application/javascript application/ecmascript
    AddOutputFilterByType ${TRANSFER_COMPRESSION} application/rss+xml
    AddOutputFilterByType ${TRANSFER_COMPRESSION} application/xml
  </IfModule>
</IfDefine>
and replacing /etc/apache2/mods-available/deflate.conf with the following:
# Moved to /etc/apache2/conf-available/compression.conf as per https://bugs.debian.org/972632
before enabling this new config:
a2enconf compression
Next, I made my blog available as a Tor onion service by putting the following in /etc/apache2/sites-available/blog.conf:
<VirtualHost *:443>
    ServerName feeding.cloud.geek.nz
    ServerAlias xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion
    Header set Onion-Location "http://xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion%{REQUEST_URI}s"
    Header set alt-svc 'h2="xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion:443"; ma=315360000; persist=1'
    ...
</VirtualHost>
<VirtualHost *:80>
    ServerName xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion
    Include /etc/fmarier-org/blog-common
</VirtualHost>
Then I followed the Mozilla Observatory recommendations and enabled the following security headers:
Header set Content-Security-Policy: "default-src 'none'; report-uri https://fmarier.report-uri.com/r/d/csp/enforce ; style-src 'self' 'unsafe-inline' ; img-src 'self' https://seccdn.libravatar.org/ ; script-src https://feeding.cloud.geek.nz/ikiwiki/ https://xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion/ikiwiki/ http://xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion/ikiwiki/ 'unsafe-inline' 'sha256-pA8FbKo4pYLWPDH2YMPqcPMBzbjH/RYj0HlNAHYoYT0=' 'sha256-Kn5E/7OLXYSq+EKMhEBGJMyU6bREA9E8Av9FjqbpGKk=' 'sha256-/BTNlczeBxXOoPvhwvE1ftmxwg9z+WIBJtpk3qe7Pqo=' ; base-uri 'self'; form-action 'self' ; frame-ancestors 'self'"
Header set X-Frame-Options: "SAMEORIGIN"
Header set Referrer-Policy: "same-origin"
Header set X-Content-Type-Options: "nosniff"
Note that the Mozilla Observatory is mistakenly identifying HTTP onion services as insecure, so you can ignore that failure. I also used the Mozilla TLS config generator to improve the TLS config for my server. Then I added security.txt and gpc.json to the root of my git repo and then added the following aliases to put these files in the right place:
Alias /.well-known/gpc.json /var/www/blog/gpc.json
Alias /.well-known/security.txt /var/www/blog/security.txt
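For reference, and not taken from the post itself, the two well-known files are short. A minimal security.txt (the contact address and expiry date below are placeholders) could look like:
Contact: mailto:security@example.org
Expires: 2022-12-31T23:59:59.000Z
and a gpc.json signalling support for Global Privacy Control is simply:
{ "gpc": true, "lastUpdate": "2021-06-14" }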
I also followed these instructions to create a sitemap for my blog with the following alias:
Alias /sitemap.xml /var/www/blog/sitemap/index.rss
Finally, I simplified a few error pages to save bandwidth:
ErrorDocument 301 " "
ErrorDocument 302 " "
ErrorDocument 404 "Not Found"

Monitoring 404s Another advantage of running my own web server is that I can monitor the 404s easily using logcheck by putting the following in /etc/logcheck/logcheck.logfiles:
/var/log/apache2/blog-error.log 
Based on that, I added a few redirects to point bots and users to the location of my RSS feed:
Redirect permanent /atom /index.atom
Redirect permanent /comments.rss /comments/index.rss
Redirect permanent /comments.atom /comments/index.atom
Redirect permanent /FeedingTheCloud /index.rss
Redirect permanent /feed /index.rss
Redirect permanent /feed/ /index.rss
Redirect permanent /feeds/posts/default /index.rss
Redirect permanent /rss /index.rss
Redirect permanent /rss/ /index.rss
and to tell them to stop trying to fetch obsolete resources:
Redirect gone /~ff/FeedingTheCloud
Redirect gone /gittip_button.png
Redirect gone /ikiwiki.cgi
I also used these 404s to discover a few old Feedburner URLs that I could redirect to the right place using archive.org:
Redirect permanent /feeds/1572545745827565861/comments/default /posts/watch-all-of-your-logs-using-monkeytail/comments.atom
Redirect permanent /feeds/1582328597404141220/comments/default /posts/news-feeds-rssatom-for-mythtvorg-and/comments.atom
...
Redirect permanent /feeds/8490436852808833136/comments/default /posts/recovering-lost-git-commits/comments.atom
Redirect permanent /feeds/963415010433858516/comments/default /posts/debugging-openwrt-routers-by-shipping/comments.atom
I also put the following robots.txt in the git repo in order to stop a bunch of authentication errors coming from crawlers:
User-agent: *
Disallow: /blog.cgi
Disallow: /ikiwiki.cgi

Future improvements There are a few things I'd like to improve on my current setup. The first one is to remove the iwikihosting and gitpush plugins and replace them with a small script which would simply git push to the read-only GitHub mirror. Then I could uninstall the ikiwiki-hosting-common and ikiwiki-hosting-web since that's all I use them for. Next, I would like to have proper support for signed git pushes. At the moment, I have the following in /home/blog/source.git/config:
[receive]
    advertisePushOptions = true
    certNonceSeed = "(random string)"
but I'd like to also reject unsigned pushes. While my blog now has a CSP policy which doesn't rely on unsafe-inline for scripts, it does rely on unsafe-inline for stylesheets. I tried to remove this but the actual calls to allow seemed to be located deep within jQuery and so I gave up. Update: now fixed. Finally, I'd like to figure out a good way to deal with articles which don't currently have comments. At the moment, if you try to subscribe to their comment feed, it returns a 404. For example:
[Sun Jun 06 17:43:12.336350 2021] [core:info] [pid 30591:tid 140253834704640] [client 66.249.66.70:57381] AH00128: File does not exist: /var/www/blog/posts/using-iptables-with-network-manager/comments.atom
This is obviously not ideal since many feed readers will refuse to add a feed which is currently not found even though it could become real in the future. If you know of a way to fix this, please let me know.
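As for the first future improvement mentioned above, a minimal sketch of such a small replacement script would be a hook in the bare repository that mirrors every push to the read-only GitHub copy; this is an assumption on my part, not something from the post, and note that the setup file already installs the ikiwiki wrapper as hooks/post-update, so in practice the push would have to be chained after that wrapper. The remote URL below is a placeholder:
#!/bin/sh
# run after the existing ikiwiki post-update wrapper has rebuilt the wiki
git push --mirror git@github.com:example/blog-mirror.git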

1 June 2021

Paul Wise: FLOSS Activities May 2021

Focus This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

Administration
  • Debian wiki: unblock IP addresses, approve accounts

Communication
  • Joined the great IRC migration
  • Respond to queries from Debian users and contributors on the mailing lists and IRC

Sponsors The purple-discord, sptag and esprima-python work was sponsored by my employer. All other work was done on a volunteer basis.

30 April 2021

Paul Wise: FLOSS Activities April 2021

Focus This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

Administration
  • Debian: restart service killed by OOM killer, revert mirror redirection
  • Debian wiki: unblock IP addresses, approve accounts

Communication

Sponsors The flower/sptag work was sponsored by my employer. All other work was done on a volunteer basis.

1 April 2021

Utkarsh Gupta: FOSS Activites in March 2021

Here's my (eighteenth) monthly update about the activities I've done in the F/L/OSS world.

Debian
This was my 27th month of actively contributing to Debian. I became a DM in late March 2019 and a DD on Christmas '19! \o/ This month was a bit exhausting; lots of moving parts. With the financial year ending, it was even more crazy, with me running around to banks, CA, et al.
Anyway, with me now working on Ubuntu full-time, I did little Debian work this month. Here are the things I worked on:

Uploads and bug fixes:

Other $things:
  • Attended the Debian LTS team meeting.
  • Mentoring for newcomers.
  • Moderation of -project mailing list.

Debian (E)LTS
Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success. And Debian Extended LTS (ELTS) is its sister project, extending support to the Jessie release (+2 years after LTS support). This was my eighteenth month as a Debian LTS and ninth month as a Debian ELTS paid contributor.
I was assigned 60.00 hours for LTS and 39.00 hours for ELTS and worked on the following things:

LTS CVE Fixes and Announcements:

ELTS CVE Fixes and Announcements:

Other (E)LTS Work:
  • Front-desk duty from 01-03 until 07-03 for ELTS and then from 29-03 until 04-04 for both LTS and ELTS.
  • Triaged wpa, python-aiohttp, spip, wpa, qemu, tomcat7, tomcat8, grub2, mupdf, openssh, tiff, spice, pillow, xmlgraphics-commons, batik, libupnp, ca-certificates, salt, squid3, shibboleth-sp2, courier-authlib, cloud-init, spamassassin, openssl, libcaca, and openjpeg2.
  • Marked CVE-2021-21330/python-aiohttp as not-affected for stretch.
  • Marked CVE-2021-20233, CVE-2021-20225, CVE-2020-27779, CVE-2020-27778, CVE-2020-27749, CVE-2020-27748, CVE-2020-25647, CVE-2020-25632, CVE-2020-25631, and CVE-2020-14372, affecting grub2, as ignored for stretch and jessie.
  • Marked CVE-2020-27842/openjpeg2 as no-dsa for jessie.
  • Marked CVE-2020-27843/openjpeg2 as no-dsa for jessie.
  • Marked CVE-2021-28041/openssh as not-affected for jessie.
  • Marked CVE-2020-3552{3,4}/tiff as no-dsa for jessie.
  • Marked CVE-2021-20201/spice as no-dsa for jessie.
  • Marked CVE-2020-11988/xmlgraphics-commons as postponed for jessie.
  • Marked CVE-2020-11987/batik as postponed for jessie.
  • Marked CVE-2020-12695/libupnp as no-dsa for stretch.
  • Marked CVE-2021-25122/tomcat7 as not-affected for stretch.
  • Marked CVE-2021-25329/tomcat7 as ignored for stretch.
  • Marked CVE-2021-28116/squid3 as postponed for stretch and jessie.
  • Marked CVE-2021-3449/openssl as not-affected for stretch.
  • Document extra notes for grub2 for LTS and co-ordinate with the sec-team.
  • Document extra notes for pillow about piled-up issues in jessie.
  • Issued DLA-2593-1 for ca-certificates on Microsoft's request; co-ordinating w/ them.
  • Co-ordinating w/ maintainer of courier-authlib for stretch and jessie update.
  • Fixing build failures of ELTS security tracker and re-ordering entries in data/CVE-EXTENDED-LTS/list file.
  • Answer queries of dupondje and mikap about openssl on IRC; and it being not-affected for stretch.
  • Help review the status of CVE-2021-3121/golang-github-gogo-protobuf-dev for Ola.
  • Co-ordinating w/ Noah for cloud-init and setuptools.
  • Auto EOL'ed mongodb, linux, guacamole-client, node-xmlhttprequest, newlib, neutron, privoxy, glpi, and zabbix for jessie.
  • Attended monthly meeting for Debian LTS.
  • Answered questions (& discussions) on IRC (#debian-lts and #debian-elts).
  • General and other discussions on LTS private and public mailing list.

Until next time.
:wq for today.

31 March 2021

Timo Jyrinki: MotionPhoto / MicroVideo File Formats on Pixel Phones

Google Pixel phones support what they call Motion Photo which is essentially a photo with a short video clip attached to it. They are quite nice since they bring the moment alive, especially as the capturing of the video starts a small moment before the shutter button is pressed. For most viewing programs they simply show as static JPEG photos, but there is more to the files.
I'd really love proper Shotwell support for these file formats, so I posted a longish explanation with many of the details in this blog post to a ticket there too. Examples of the newer format are linked there too.
Info posted to Shotwell ticket

There are actually two different formats, an old one that is already obsolete, and a newer current format. The older ones are those that your Pixel phone recorded as "MVIMG_[datetime].jpg", and they have the following meta-data:
Xmp.GCamera.MicroVideo                       XmpText     1  1
Xmp.GCamera.MicroVideoVersion XmpText 1 1
Xmp.GCamera.MicroVideoOffset XmpText 7 4022143
Xmp.GCamera.MicroVideoPresentationTimestampUs XmpText 7 1331607
The offset is actually from the end of the file, so one needs to calculate accordingly. But it is exact otherwise, so one simply extracts the video with that meta-data information:
#!/bin/bash
#
# Extracts the microvideo from a MVIMG_*.jpg file

# The offset is from the ending of the file, so calculate accordingly
offset=$(exiv2 -p X "$1" | grep MicroVideoOffset | sed 's/.*\"\(.*\)"/\1/')
filesize=$(du --apparent-size --block=1 "$1" | sed 's/^\([0-9]*\).*/\1/')
extractposition=$(expr $filesize - $offset)
echo offset: $offset
echo filesize: $filesize
echo extractposition=$extractposition
dd if="$1" skip=1 bs=$extractposition of="$(basename -s .jpg $1).mp4"
The newer format is recorded in filenames called "PXL_[datetime].MP.jpg", and they have a _lot_ of additional metadata:
Xmp.GCamera.MotionPhoto                      XmpText     1  1
Xmp.GCamera.MotionPhotoVersion XmpText 1 1
Xmp.GCamera.MotionPhotoPresentationTimestampUs XmpText 6 233320
Xmp.xmpNote.HasExtendedXMP XmpText 32 E1F7505D2DD64EA6948D2047449F0FFA
Xmp.Container.Directory XmpText 0 type="Seq"
Xmp.Container.Directory[1] XmpText 0 type="Struct"
Xmp.Container.Directory[1]/Container:Item XmpText 0 type="Struct"
Xmp.Container.Directory[1]/Container:Item/Item:Mime XmpText 10 image/jpeg
Xmp.Container.Directory[1]/Container:Item/Item:Semantic XmpText 7 Primary
Xmp.Container.Directory[1]/Container:Item/Item:Length XmpText 1 0
Xmp.Container.Directory[1]/Container:Item/Item:Padding XmpText 1 0
Xmp.Container.Directory[2] XmpText 0 type="Struct"
Xmp.Container.Directory[2]/Container:Item XmpText 0 type="Struct"
Xmp.Container.Directory[2]/Container:Item/Item:Mime XmpText 9 video/mp4
Xmp.Container.Directory[2]/Container:Item/Item:Semantic XmpText 11 MotionPhoto
Xmp.Container.Directory[2]/Container:Item/Item:Length XmpText 7 1679555
Xmp.Container.Directory[2]/Container:Item/Item:Padding XmpText 1 0
Sounds like fun and lots of information. However, I didn't see why the length in the first item is 0 and I didn't see how to use the latter Length info. But I can use the mp4 headers to extract it:
#!/bin/bash
#
# Extracts the motion part of a MotionPhoto file PXL_*.MP.mp4

extractposition=$(grep --binary --byte-offset --only-matching --text \
-P "\x00\x00\x00\x18\x66\x74\x79\x70\x6d\x70\x34\x32" $1 sed 's/^\([0-9]*\).*/\1/')

dd if="$1" skip=1 bs=$extractposition of="$(basename -s .jpg $1).mp4"
UPDATE: I wrote most of this blog post earlier. When now actually getting to publish it a week later, I see the obvious: the Length is again simply the offset from the end of the file, so one could use the same, less brute-force approach as for MVIMG. I'll leave the above as is, however, for the sake of the binary grepping. (Cross-posted to my other blog.)
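For what it's worth, here is a small Python sketch of that less brute-force approach for the newer format. It is my own, untested take: it reads exiv2's interpreted XMP properties (exiv2 -p x, the same listing format as above), assumes that the video entry is the one with the largest Item:Length, and, as observed above, treats that length as the offset of the MP4 from the end of the file:
#!/usr/bin/python3
# Sketch: extract the motion part of a PXL_*.MP.jpg using the Item:Length
# of the video/mp4 container entry as an offset from the end of the file.
import re
import subprocess
import sys

src = sys.argv[1]
xmp = subprocess.run(["exiv2", "-p", "x", src],
                     capture_output=True, text=True, check=True).stdout

# The primary image/jpeg entry has Length 0, so the largest value is the video's.
length = max(int(m) for m in
             re.findall(r"Item:Length\s+XmpText\s+\d+\s+(\d+)", xmp))

with open(src, "rb") as f:
    data = f.read()
out = re.sub(r"\.jpg$", "", src) + ".mp4"
with open(out, "wb") as f:
    f.write(data[len(data) - length:])
print(f"wrote {out} ({length} bytes)")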

29 March 2021

Russ Allbery: Review: JavaScript: The Definitive Guide

Review: JavaScript: The Definitive Guide, by David Flanagan
Publisher: O'Reilly
Copyright: May 2020
ISBN: 1-4919-5202-4
Format: Trade paperback
Pages: 665
JavaScript: The Definitive Guide has been frequently revised for new versions of JavaScript and therefore has multiple editions. This review is of the seventh edition, first published in May of 2020. Reviews of programming language books are challenging since people learn languages in different ways. A short calibration for my preferences may therefore be useful. I'm both an experienced programmer in multiple languages (C, Perl, Python, and some Java and Ruby professionally; Rust, some PHP, and a few minor languages as a hobby) and I specialized in software theory in college. I therefore like to learn languages comparatively and am comfortable with a lot of up-front syntax and discussion of the unique properties of the language. Introductory programs and practical exercises don't matter as much to me; I'm happy to hold the syntax in my head until enough of the language has been introduced to write simple programs. For me, this book is excellent. It's one of the best language manuals that I've read, and that requires some work because JavaScript is a sprawling mess with odd corners, deprecated features, and alternate implementations of core constructs. Flanagan takes the syntax-first, comprehensive approach that I prefer, working methodically through the language (defining your own functions isn't introduced until chapter eight) and discussing all of the quirks as he goes. I felt like I thoroughly understood each portion of the language before moving on. And this book is tight. Some comprehensive language introductions sprawl, but the benefit of seven editions of iteration is a book that has been honed to the most direct and effective explanation of each concept. The section on type conversions with operators, for example, was so good that I was able to immediately understand the unintuitive result of [1] + 2 (the string '12'), despite this being one of the most confusing parts of the language. The sections on JavaScript's prototype-based object type system and its three concurrency models (callbacks, promises, and async/await) were equally good. I came away feeling like I not only understood promises and callback chains but had a feel for how the same code would look when written in the different systems. The drawback in this approach is that if you instead want a language reference that only tells you the parts of the language that you should use and leaves out the legacy weirdness and obscure corners for later (or never), this may not be the book for you. Flanagan labels the obsolete constructs, but he's meticulous about explaining the entire language, including such things as new Boolean or var variables that no one should use. This is what I wanted; I prefer to have a thorough grounding in language primitives so that it doesn't surprise me. But it can be a lot to juggle and prune in your head. JavaScript is a language used in some very different domains. The approach Flanagan takes to that is to spend as long as possible on the core language that's usable both in the browser and on the server (while marking the pieces, such as the module system, that are markedly different between Node and browsers). He then puts two monster chapters at the back of the book that cover JavaScript in web browsers and JavaScript as implemented by Node. Both are more of overviews than orientations, since a comprehensive manual for either is probably as long again as this book, but they were more than adequate for my purposes.
(I bogged down a bit in the web browser chapter, in part because I didn't have an immediate use for most of the material.) Flanagan wisely defers to MDN as the reference manual for the JavaScript APIs available in web browsers. I thought Flanagan also hit the right balance of explanation to examples, and did a good job controlling the length of the examples. Most of the code excerpts are short and to the point. The longer ones have a high level of explanatory power per line, since Flanagan uses them to pull together multiple concepts and show how they interact. I was particularly impressed with the example that closes the chapter on web browsers, which uses <canvas>, ImageData, generators, promises, web workers, and other areas of the language Flanagan previously explained to implement a Mandelbrot set explorer in eleven pages of code. I think that's the longest example in the book, and it's well worth it. This sort of introduction will always have limitations. Flanagan provides a brief orientation to the ecosystem surrounding JavaScript in the last chapter, but most JavaScript programmers will be working with packaging tools and frameworks that could themselves be the topic of another book and that he doesn't have room to cover. JavaScript, even more than most languages, is commonly used via a heavy layer of supporting libraries and abstractions, so you will probably not be able to tackle a practical JavaScript project using solely the material in this book. But if you're the sort of programmer who wants to start with a solid syntactical and conceptual understanding of the language core before starting on more applied topics, I've rarely seen it done better than this book. If you want a quick-start guide that will get you writing code quickly and is opinionated about what parts of the language you should learn, this may not be the book for you. But if you're comfortable with comprehensive detail in your language guides, this was exactly what I was looking for. Recommended. Rating: 9 out of 10

18 February 2021

Julian Andres Klode: APT 2.2 released

APT 2.2.0 marks the freeze of the 2.1 development series and the start of the 2.2 stable series. Let's have a look at what changed compared to 2.0. Many of you who run Debian testing or unstable, or Ubuntu groovy or hirsute will already have seen most of those changes.

New features
  • Various patterns related to dependencies, such as ?depends are now available (2.1.16)
  • The Protected field is now supported. It replaces the previous Important field and is like Essential, but only for installed packages (with some minor differences, mainly in terms of ordering the installs).
  • The update command has gained an --error-on=any option that makes it error out on any failure, not just what it considers persistent ones.
  • The rred method can now be used as a standalone program to merge pdiff files
  • APT now implements phased updates. Phasing is used in Ubuntu to slow down and control the roll out of updates in the -updates pocket, but has previously only been available to desktop users using update-manager.

Other behavioral changes
  • The kernel autoremoval helper code has been rewritten from shell to C++ and now runs at run-time, rather than at kernel install time, in order to correctly protect the kernel that is running now, rather than the kernel that was running when we were installing the newest one. It also now protects only up to 3 kernels, instead of up to 4, as was originally intended, and was the case before the 1.1 series. This prevents /boot partitions from running out of space, especially on Ubuntu which has boot partitions sized for the original spec.

Performance improvements
  • The cache is now hashed using XXH3 instead of Adler32 (or CRC32c on SSE4.2 platforms)
  • The hash table size has been increased

Bug fixes
  • * wildcards work normally again (since 2.1.0)
  • The cache file now includes all translation files in /var/lib/apt/lists, so multi-user systems with different locales correctly show translated descriptions now.
  • URLs are no longer dequoted on redirects only to be requoted again, fixing some redirects where servers did not expect different quoting.
  • Immediate configuration is now best-effort, and failure is no longer fatal.
  • various changes to solver marking leading to different/better results in some cases (since 2.1.0)
  • The lower level I/O bits of the HTTP method have been rewritten to hopefully improve stability
  • The HTTP method no longer infinitely retries downloads on some connection errors
  • The pkgnames command no longer accidentally includes source packages
  • Various fixes from fuzzing efforts by David

Security fixes
  • Out-of-bound reads in ar and tar implementations (CVE-2020-3810, 2.1.2)
  • Integer overflows in ar and tar (CVE-2020-27350, 2.1.13)
(all of which have been backported to all stable series, back all the way to 1.0.9.8.* series in jessie eLTS)

Incompatibilities
  • N/A - there were no breaking changes in apt 2.2 that we are aware of.

Deprecations
  • apt-key(1) is scheduled to be removed for Q2/2022, and several new warnings have been added. apt-key was made obsolete in version 0.7.25.1, released in January 2010, by /etc/apt/trusted.gpg.d becoming a supported place to drop additional keyring files, and was since then only intended for deleting keys in the legacy trusted.gpg keyring. Please manage files in trusted.gpg.d yourself; or place them in a different location such as /etc/apt/keyrings (or make up your own, there's no standard location) or /usr/share/keyrings, and use signed-by in the sources.list.d files (see the example entry after this list). The legacy trusted.gpg keyring still works, but will also stop working eventually. Please make sure you have all your keys in trusted.gpg.d. Warnings might be added in the upcoming months when a signature could not be verified using just trusted.gpg.d. Future versions of APT might switch away from GPG.
  • As a reminder, regular expressions and wildcards other than * inside package names are deprecated (since 2.0). They are not available anymore in apt(8), and will be removed for safety reasons in apt-get in a later release.
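As an illustration of the signed-by approach from the apt-key entry above, a sources.list.d entry could look like the following (repository URL, suite and keyring path are made up):
deb [signed-by=/etc/apt/keyrings/example-archive.gpg] https://deb.example.com/debian bullseye main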

17 December 2020

Bits from Debian: The Debian web updates its homepage and prepares for a major renewal

Today, the Debian website displays a new homepage. Since the most recent web team sprint in March 2019, we have been working on renewing the structure, content, layout and scripts that build the site. There has been work mainly in two areas: removing or updating obsolete content, and creating a new homepage which is more attractive to newcomers, and which also highlights the social aspect of the Debian project in addition to the operating system we develop. [Debian website: part of the old homepage (back) and the new one (front)] Although this took longer than we would have liked, and we don't consider this new homepage final, we think it's a good first step towards a much better web site. The web team will continue to work on restructuring the Debian website. We would like to appeal to the community for help, and are also considering external assistance, since we're a small group, whose members are also involved in other Debian teams. Some of the next steps we expect to take are improving the CSS, icons, and layout in general, and reviewing the content to give it a better structure. If you would like to help, contact us. You can reply to the version of this article (with some more details) published in our public mailing list or chat with us in the #debian-www IRC channel (at irc.debian.org).

11 December 2020

Markus Koschany: My Free Software Activities in November 2020

Welcome to gambaru.de. Here is my monthly report (+ the first week in December) that covers what I have been doing for Debian. If you're interested in Java, Games and LTS topics, this might be interesting for you. Debian Games
Debian Java Misc Debian LTS This was my 57th month as a paid contributor and I have been paid to work 12 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following: ELTS Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 8 "Jessie". This was my 30th month and I have been paid to work 15 hours on ELTS. Thanks for reading and see you next time.

3 December 2020

Bdale Garbee: Shifting Emphasis

I joined the Debian project in late 1994, well before the first stable release was issued, and have been involved in various ways continuously ever since. Over the years, I adopted a number of packages that are, or at least were at one time, fundamental to the distribution. But, not surprisingly, my interests have shifted over time. In the more than quarter century I've contributed to Debian, I've adopted existing packages that needed attention, packaged new software I wanted to use that wasn't yet in Debian, offered packages up for others to adopt, and even sometimes requested the removal of packages that became obsolete or replaced by something better. That all felt completely healthy. But over the last couple weeks, I realized I'm still "responsible" for some packages I'd had for a very long time, that generally work well but over time have accumulated bugs in functionality I just don't use, and frankly haven't been able to find the motivation to chase down. As one example, I just noticed that I first uploaded the gzip package 25 years ago today, on 2 December 1995. And while the package works fine for me and most other folks, there are 30 outstanding bugs and 3 forwarded bugs that I just can't muster up any energy to address. So, I just added gzip to a short list of packages I've offered up for adoption recently. I'm pleased that tar already has a new maintainer, and hope that both sudo and gzip will get more attention soon. It's not that I'm less interested in Debian. I've just been busy recently packaging up more software I use or want to use in designing high power model rockets and the solid propellant motors I fly in them, and would rather spend the time I have available for Debian maintaining those packages and all their various build dependencies than continuing to be responsible for core packages in the distribution that "work fine for me" but could use attention. I'm writing about this partly to mark the passing of more than a quarter century as a package maintainer for Debian, partly to encourage other Debian package maintainers with the right skills and motivation to consider adopting some of the packages I'm giving up, and finally to encourage other long-time participants in Debian to spend a little time evaluating their own package lists in a similar way.

9 November 2020

Joachim Breitner: Distributing Haskell programs in a multi-platform zip file

My maybe most impactful piece of code is tttool and the surrounding project, which allows you to create your own content for the Ravensburger Tiptoi platform. The program itself is a command line tool, and in this blog post I want to show how I go about building that program for Linux (both normal and static builds), Windows (cross-compiled from Linux), OSX (only on CI), all combined into and released as a single zip file. Maybe some of it is useful or inspiring to my readers, or can even serve as a template. This being a blog post, though, note that it may become obsolete or outdated.

Ingredients I am building on these components:
  • Nix and nixpkgs Without the nix build system and package manager I probably wouldn't even attempt to pull off complex tasks that may, say, require a patched ghc. For many years I resisted learning about nix, but when I eventually had to, I didn't want to go back.
  • haskell.nix This project provides an alternative Haskell build infrastructure for nix. While this is not crucial for tttool, it helps that they tend to have some more cross-compilation-related patches than the official nixpkgs. I also like that it more closely follows the cabal build work-flow, where cabal calculates a build plan based on your project's dependencies. It even has decent documentation (which is a new thing compared to two years ago).
  • niv Niv is a neat little tool to keep track of your dependencies. You can quickly update them with, say, niv update nixpkgs. But what's really great is to temporarily replace one of your dependencies with a local checkout, e.g. via NIV_OVERRIDE_haskellNix=$HOME/build/haskell/haskell.nix nix-instantiate -A osx-exe-bundle There is a Github action that will keep your niv-managed dependencies up-to-date.
  • Cachix This service (proprietary, but free for public stuff up to 10GB) gives your project its own nix cache. This means that build artifacts can be cached between CI builds or even build steps, and your contributors. A cache like this is a must if you want to use nix in more interesting ways where you may end up using, say, a changed GHC compiler. Comes with GitHub actions integration.
  • CI via Github actions
Until recently, I was using Travis, but Github actions are just a tad easier to set up and, maybe more important here, the job times are high enough that you can rebuild GHC if you have to, and even if your build gets canceled or times out, cleanup CI steps still happen, so that any new nix build products will still reach your nix cache.

The repository setup All files discussed in the following are reflected at https://github.com/entropia/tip-toi-reveng/tree/7020cde7da103a5c33f1918f3bf59835cbc25b0c. We are starting with a fairly normal Haskell project, with a single .cabal file (but multi-package projects should work just fine). To make things more interesting, I also have a cabal.project which configures one dependency to be fetched via git from a specific fork. To start building the nix infrastructure, we can initialize niv and configure it to use the haskell.nix repo:
niv init
niv add input-output-hk/haskell.nix -n haskellNix
This creates nix/sources.json (which you can also edit by hand) and nix/sources.nix (which you can treat like a black box). Now we can start writing the all-important default.nix file, which defines almost everything of interest here. I will just go through it line by line, and explain what I am doing here.
{ checkMaterialization ? false }:
This defines a flag that we can later set when using nix-build, by passing --arg checkMaterialization true, and which is off by default. I'll get to that flag later.
let
  sources = import nix/sources.nix;
  haskellNix = import sources.haskellNix {};
This imports the sources as defined in nix/sources.json, and loads the pinned revision of the haskell.nix repository.
  # windows crossbuilding with ghc-8.10 needs at least 20.09.
  # A peek at https://github.com/input-output-hk/haskell.nix/blob/master/ci.nix can help
  nixpkgsSrc = haskellNix.sources.nixpkgs-2009;
  nixpkgsArgs = haskellNix.nixpkgsArgs;
  pkgs = import nixpkgsSrc nixpkgsArgs;
Now we can define pkgs, which is our version of the nixpkgs package set, extended with the haskell.nix machinery. We rely on haskell.nix to pin a suitable revision of the nixpkgs set (see how we are using their niv setup). Here we could add our own configuration, overlays, etc. to nixpkgsArgs. In fact, we do in
  pkgs-osx = import nixpkgsSrc (nixpkgsArgs // { system = "x86_64-darwin"; });
to get the nixpkgs package set of an OSX machine.
  # a nicer filterSource
  sourceByRegex =
    src: regexes: builtins.filterSource (path: type:
      let relPath = pkgs.lib.removePrefix (toString src + "/") (toString path); in
      let match = builtins.match (pkgs.lib.strings.concatStringsSep "|" regexes); in
      ( type == "directory" && match (relPath + "/") != null
        || match relPath != null)) src;
Next I define a little helper that I have been copying between projects, and which allows me to define the input to a nix derivation (i.e. a nix build job) with a set of regexes. I'll use that soon.
  tttool-exe = pkgs: sha256:
    (pkgs.haskell-nix.cabalProject {
The cabalProject function takes a cabal project and turns it into a nix project, running cabal v2-configure under the hood to let cabal figure out a suitable build plan. Since we want to have multiple variants of the tttool, this is so far just a function of two arguments pkgs and sha256, which will be explained in a bit.
      src = sourceByRegex ./. [
          "cabal.project"
          "src/"
          "src/.*/"
          "src/.*.hs"
          ".*.cabal"
          "LICENSE"
        ];
The cabalProject function wants to know the source of the Haskell projects. There are different ways of specifying this; in this case I went for a simple whitelist approach. Note that cabal.project.freeze (which exists in the directory) is not included.
      # Pinning the input to the constraint solver
      compiler-nix-name = "ghc8102";
The cabal solver doesn't find out which version of ghc to use, that is still my choice. I am using GHC-8.10.2 here. It may require a bit of experimentation to see which version works for your project, especially when cross-compiling to odd targets.
      index-state = "2020-11-08T00:00:00Z";
I want the build to be deterministic, and not let cabal suddenly pick different package versions just because something got uploaded. Therefore I specify which snapshot of the Hackage package index it should consider.
      plan-sha256 = sha256;
      inherit checkMaterialization;
Here we use the second parameter, but I'll defer the explanation for a bit.
      modules = [{
        # smaller files
        packages.tttool.dontStrip = false;
      }] ++
These modules are essentially configuration data that is merged in a structural way. Here we say that we want the tttool binary to be stripped (saves a few megabytes).
      pkgs.lib.optional pkgs.hostPlatform.isMusl {
        packages.tttool.configureFlags = [ "--ghc-option=-static" ];
Also, when we are building on the musl platform, that's when we want to produce a static build, so let's pass -static to GHC. This seems to be enough in terms of flags to produce static binaries. It helps that my project is using mostly pure Haskell libraries; if you link against C libraries you might have to jump through additional hoops to get static linking going. The haskell.nix documentation has a section on static building with some flags to cargo-cult.
        # terminfo is disabled on musl by haskell.nix, but still the flag
        # is set in the package plan, so override this
        packages.haskeline.flags.terminfo = false;
      };
This (again only used when the platform is musl) seems to be necessary to work around what might be a bug in haskell.nix.
    }).tttool.components.exes.tttool;
The cabalProject function returns a data structure with all Haskell packages of the project, and for each package the different components (libraries, tests, benchmarks and of course executables). We only care about the tttool executable, so let's project that out.
  osx-bundler = pkgs: tttool:
   pkgs.stdenv.mkDerivation {
      name = "tttool-bundle";
      buildInputs = [ pkgs.macdylibbundler ];
      builder = pkgs.writeScript "tttool-osx-bundler.sh" ''
        source ${pkgs.stdenv}/setup
        mkdir -p $out/bin/osx
        cp ${tttool}/bin/tttool $out/bin/osx
        chmod u+w $out/bin/osx/tttool
        dylibbundler \
          -b \
          -x $out/bin/osx/tttool \
          -d $out/bin/osx \
          -p '@executable_path' \
          -i /usr/lib/system \
          -i ${pkgs.darwin.Libsystem}/lib
      '';
   };
This function, only to be used on OSX, takes a fully built tttool, finds all the system libraries it is linking against, and copies them next to the executable, using the nice macdylibbundler. This way we can get a self-contained executable. A nix expert will notice that this probably should be written with pkgs.runCommandNoCC, but then dylibbundler fails because it lacks otool. This should work eventually, though.
in rec {
  linux-exe      = tttool-exe pkgs
     "0rnn4q0gx670nzb5zp7xpj7kmgqjmxcj2zjl9jqqz8czzlbgzmkh";
  windows-exe    = tttool-exe pkgs.pkgsCross.mingwW64
     "01js5rp6y29m7aif6bsb0qplkh2az0l15nkrrb6m3rz7jrrbcckh";
  static-exe     = tttool-exe pkgs.pkgsCross.musl64
     "0gbkyg8max4mhzzsm9yihsp8n73zw86m3pwvlw8170c75p3vbadv";
  osx-exe        = tttool-exe pkgs-osx
     "0rnn4q0gx670nzb5zp7xpj7kmgqjmxcj2zjl9jqqz8czzlbgzmkh";
Time to create the four versions of tttool. In each case we use the tttool-exe function from above, passing the package set (pkgs, ) and a SHA256 hash. The package set is either the normal one, or it is one of those configured for cross compilation, building either for Windows or for Linux using musl, or it is the OSX package set that we instantiated earlier. The SHA256 hash describes the result of the cabal plan calculation that happens as part of cabalProject. By noting down the expected result, nix can skip that calculation, or fetch it from the nix cache etc. How do we know what number to put there, and when to change it? That's when the --arg checkMaterialization true flag comes into play: When that is set, cabalProject will not blindly trust these hashes, but rather re-calculate them, and tell you when they need to be updated. We'll make sure that CI checks them.
  osx-exe-bundle = osx-bundler pkgs-osx osx-exe;
For OSX, I then run the output through osx-bundler defined above, to make it independent of any library paths in /nix/store. This is already good enough to build the tool for the various systems! The rest of the file is related to packaging up the binaries, to tests, and various other things, but nothing too essential. So if you got bored, you can more or less stop now.
  static-files = sourceByRegex ./. [
    "README.md"
    "Changelog.md"
    "oid-decoder.html"
    "example/.*"
    "Debug.yaml"
    "templates/"
    "templates/.*\.md"
    "templates/.*\.yaml"
    "Audio/"
    "Audio/digits/"
    "Audio/digits/.*\.ogg"
  ];
  contrib = ./contrib;
The final zip file that I want to serve to my users contains a bunch of files from throughout my repository; I collect them here.
  book =  ;
The project comes with documentation in the form of a Sphinx project, which we build here. I'll omit the details, because they are not relevant for this post (but of course you can peek if you are curious).
  os-switch = pkgs.writeScript "tttool-os-switch.sh" ''
    #!/usr/bin/env bash
    case "$OSTYPE" in
      linux*)   exec "$(dirname "''${BASH_SOURCE[0]}")/linux/tttool" "$@" ;;
      darwin*)  exec "$(dirname "''${BASH_SOURCE[0]}")/osx/tttool" "$@" ;;
      msys*)    exec "$(dirname "''${BASH_SOURCE[0]}")/tttool.exe" "$@" ;;
      cygwin*)  exec "$(dirname "''${BASH_SOURCE[0]}")/tttool.exe" "$@" ;;
      *)        echo "unsupported operating system $OSTYPE" ;;
    esac
  '';
The zipfile should provide a tttool command that works on all systems. To that end, I implement a simple platform switch using bash. I use pkgs.writeScript so that I can include that file directly in default.nix, but it would have been equally reasonable to just save it into nix/tttool-os-switch.sh and include it from there.
  release = pkgs.runCommandNoCC "tttool-release" {
    buildInputs = [ pkgs.perl ];
  } ''
    # check version
    version=$(${static-exe}/bin/tttool --help | perl -ne 'print $1 if /tttool-(.*) -- The swiss army knife/')
    doc_version=$(perl -ne "print \$1 if /VERSION: '(.*)'/" ${book}/book.html/_static/documentation_options.js)
    if [ "$version" != "$doc_version" ]
    then
      echo "Mismatch between tttool version \"$version\" and book version \"$doc_version\""
      exit 1
    fi
Now the derivation that builds the content of the release zip file. First I double check that the version number in the code and in the documentation matches. Note how ${static-exe} refers to a path with the built static Linux build, and ${book} the output of the book building process.
    mkdir -p $out/
    cp -vsr ${static-files}/* $out
    mkdir $out/linux
    cp -vs ${static-exe}/bin/tttool $out/linux
    cp -vs ${windows-exe}/bin/* $out/
    mkdir $out/osx
    cp -vsr ${osx-exe-bundle}/bin/osx/* $out/osx
    cp -vs ${os-switch} $out/tttool
    mkdir $out/contrib
    cp -vsr ${contrib}/* $out/contrib/
    cp -vsr ${book}/* $out
  '';
The rest of the release script just copies files from various build outputs that we have defined so far. Note that this is using both static-exe (built on Linux) and osx-exe-bundle (built on Mac)! This means you can only build the release if you either have set up a remote osx builder (a pretty nifty feature of nix, which I unfortunately can't use, since I don't have access to a Mac), or the build product must be available in a nix cache (which it is in my case, as I will explain later). The output of this derivation is a directory with all the files I want to put in the release.
  release-zip = pkgs.runCommandNoCC "tttool-release.zip" {
    buildInputs = with pkgs; [ perl zip ];
  } ''
    version=$(bash ${release}/tttool --help | perl -ne 'print $1 if /tttool-(.*) -- The swiss army knife/')
    base="tttool-$version"
    echo "Zipping tttool version $version"
    mkdir -p $out/$base
    cd $out
    cp -r ${release}/* $base/
    chmod u+w -R $base
    zip -r $base.zip $base
    rm -rf $base
  '';
And now these files are zipped up. Note that this automatically determines the right directory name and basename for the zipfile. This concludes the steps necessary for a release.
  gme-downloads =  ;
  tests =  ;
These two definitions in default.nix are related to some simple testing, and again not relevant for this post.
  cabal-freeze = pkgs.stdenv.mkDerivation {
    name = "cabal-freeze";
    src = linux-exe.src;
    buildInputs = [ pkgs.cabal-install linux-exe.env ];
    buildPhase = ''
      mkdir .cabal
      touch .cabal/config
      rm cabal.project # so that cabal new-freeze does not try to use HPDF via git
      HOME=$PWD cabal new-freeze --offline --enable-tests || true
    '';
    installPhase = ''
      mkdir -p $out
      echo "-- Run nix-shell -A check-cabal-freeze to update this file" > $out/cabal.project.freeze
      cat cabal.project.freeze >> $out/cabal.project.freeze
    '';
  };
Above I mentioned that I still would like to be able to just run cabal, and ideally it should take the same library versions that the nix-based build does. But pinning the version of ghc in cabal.project is not sufficient, I also need to pin the precise versions of the dependencies. This is best done with a cabal.project.freeze file. The above derivation runs cabal new-freeze in the environment set up by haskell.nix and grabs the resulting cabal.project.freeze. With this I can run nix-build -A cabal-freeze and fetch the file from result/cabal.project.freeze and add it to the repository.
  check-cabal-freeze = pkgs.runCommandNoCC "check-cabal-freeze" {
      nativeBuildInputs = [ pkgs.diffutils ];
      expected = cabal-freeze + /cabal.project.freeze;
      actual = ./cabal.project.freeze;
      cmd = "nix-shell -A check-cabal-freeze";
      shellHook = ''
        dest=${toString ./cabal.project.freeze}
        rm -f $dest
        cp -v $expected $dest
        chmod u-w $dest
        exit 0
      '';
    } ''
      diff -r -U 3 $actual $expected ||
        { echo "To update, please run"; echo "nix-shell -A check-cabal-freeze"; exit 1; }
      touch $out
    '';
But generated files in repositories are bad, so if that cannot be avoided, at least I want a CI job that checks if they are up to date. This job does that. What's more, it is set up so that if I run nix-shell -A check-cabal-freeze it will update the file in the repository automatically, which is much more convenient than manually copying. Lately, I have been using this pattern regularly when adding generated files to a repository:
  • Create one nix derivation that creates the files
  • Create a second derivation that compares the output of that derivation against the file in the repo
  • Create a derivation that, when run in nix-shell, updates that file. Sometimes that derivation is its own file (so that I can just run nix-shell nix/generate.nix), or it is merged into one of the other two.
This concludes the tour of default.nix.

The CI setup The next interesting bit is the file .github/workflows/build.yml, which tells Github Actions what to do:
name: "Build and package"
on:
  pull_request:
  push:
Standard prelude: Run the jobs in this file upon all pushes to the repository, and also on all pull requests. Annoying downside: If you open a PR within your repository, everything gets built twice. Oh well.
jobs:
  build:
    strategy:
      fail-fast: false
      matrix:
        include:
        - target: linux-exe
          os: ubuntu-latest
        - target: windows-exe
          os: ubuntu-latest
        - target: static-exe
          os: ubuntu-latest
        - target: osx-exe-bundle
          os: macos-latest
    runs-on: ${{ matrix.os }}
The build job is a matrix job, i.e. there are four variants, one for each of the different tttool builds, together with an indication of what kind of machine to run this on.
    steps:
    - uses: actions/checkout@v2
    - uses: cachix/install-nix-action@v12
We begin by checking out the code and installing nix via the install-nix-action.
    - name: "Cachix: tttool"
      uses: cachix/cachix-action@v7
      with:
        name: tttool
        signingKey: '${{ secrets.CACHIX_SIGNING_KEY }}'
Then we configure our Cachix cache. This means that this job will use build products from the cache if possible, and it will also push new builds to the cache. This requires a secret key, which you get when setting up your Cachix cache. See the nix and Cachix tutorial for good instructions.
    - run: nix-build --arg checkMaterialization true -A ${{ matrix.target }}
Now we can actually run the build. We set checkMaterialization to true so that CI will tell us if we need to update these hashes.
    # work around https://github.com/actions/upload-artifact/issues/92
    - run: cp -RvL result upload
    - uses: actions/upload-artifact@v2
      with:
        name: tttool (${{ matrix.target }})
        path: upload/
For convenient access to build products, e.g. from pull requests, we store them as Github artifacts. They can then be downloaded from Github's CI status page.
  test:
    runs-on: ubuntu-latest
    needs: build
    steps:
    - uses: actions/checkout@v2
    - uses: cachix/install-nix-action@v12
    - name: "Cachix: tttool"
      uses: cachix/cachix-action@v7
      with:
        name: tttool
        signingKey: '${{ secrets.CACHIX_SIGNING_KEY }}'
    - run: nix-build -A tests
The next job repeats the setup, but now runs the tests. Because of needs: build it will not start before the build job has completed. This also means that it should get the actual tttool executable to test from our nix cache.
  check-cabal-freeze:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - uses: cachix/install-nix-action@v12
    - name: "Cachix: tttool"
      uses: cachix/cachix-action@v7
      with:
        name: tttool
        signingKey: '${{ secrets.CACHIX_SIGNING_KEY }}'
    - run: nix-build -A check-cabal-freeze
The same, but now running the check-cabal-freeze test mentioned above. It is quite annoying to repeat the setup instructions for each job.
  package:
    runs-on: ubuntu-latest
    needs: build
    steps:
    - uses: actions/checkout@v2
    - uses: cachix/install-nix-action@v12
    - name: "Cachix: tttool"
      uses: cachix/cachix-action@v7
      with:
        name: tttool
        signingKey: '${{ secrets.CACHIX_SIGNING_KEY }}'
    - run: nix-build -A release-zip
    - run: unzip -d upload ./result/*.zip
    - uses: actions/upload-artifact@v2
      with:
        name: Release zip file
        path: upload
Finally, with the same setup, but slightly different artifact upload, we build the release zip file. Again, we wait for build to finish so that the built programs are in the nix cache. This is especially important since this runs on linux, so it cannot build the OSX binary and has to rely on the cache. Note that we don't need to checkMaterialization again. Annoyingly, the upload-artifact action insists on zipping the files you hand to it. A zip file that contains just a zipfile is kinda annoying, so I unpack the zipfile here before uploading the contents.

Conclusion With this setup, when I do a release of tttool, I just bump the version numbers, wait for CI to finish building, run nix-build -A release-zip and upload result/tttool-n.m.zip. A single file that works on all target platforms. I have not yet automated making the actual release, but with one release per year this is fine. Also, when trying out a new feature, I can easily create a branch or PR for that and grab the build products from Github's CI, or ask people to try them out (e.g. to see if they fixed their bugs). Note, though, that you have to sign into Github before being able to download these artifacts. One might think that this is a fairly hairy setup, finding the right combinations of various repositories so that cross-compilation works as intended. But thanks to nix's value propositions, this does work! The setup presented here was a remake of a setup I did two years ago, with a much less mature haskell.nix. Back then, I committed a fair number of generated files to git, and juggled more complex files, but once it worked, it kept working for two years. I was indeed insulated from upstream changes. I expect that this setup will also continue to work reliably, until I choose to upgrade it again. Hopefully, then things are even simpler, and require fewer workarounds or less manual intervention.

2 September 2020

Vincent Bernat: Syncing SSH keys on Cisco IOS-XR with a custom Ansible module

The cisco.iosxr collection from Ansible Galaxy provides an iosxr_user module to manage local users, along with their SSH keys. However, the module is quite slow, does not display a diff for changed SSH keys, never signals a change when a key is modified, and does not delete obsolete keys. Let's write a custom Ansible module managing only the SSH keys while fixing these issues.

Notice I recommend that you read Writing a custom Ansible module as an introduction.

How to add an SSH key to a user Adding SSH keys to users in Cisco IOS-XR is quite undocumented. First, you need to encode the key with the ssh-rsa key ASN.1 format, like an OpenSSH public key, but without the base64-encoding:
$ awk '{print $2}' id_rsa.pub \
    | base64 -d \
    > publickey_vincent.raw
Then, you upload the key with SCP to harddisk:/publickey_vincent.raw and import it for the current user with the following IOS command:
crypto key import authentication rsa harddisk:/publickey_vincent.raw
However, if you want to import a key for another user, you need to be part of the root-system group:
username vincent
 group root-lr
 group root-system
With the following admin command, you can attach a key to another user:
admin crypto key import authentication rsa username cedric harddisk:/publickey_cedric.raw

Code The module has the following signature and it installs the specified key for each user and removes keys from retired users, the ones we do not specify.
iosxr_users:
  keys:
    vincent: ssh-rsa AAAAB3NzaC1yc2EAA[ ]ymh+YrVWLZMJR
    cedric:  ssh-rsa AAAAB3NzaC1yc2EAA[ ]RShPA8w/8eC0n

Prerequisites Unlike the iosxr_user module, our custom module only handles SSH keys, one per user. Therefore, the user definitions have to already exist in the running configuration.1 Moreover, the user defined in ansible_user needs to be in the root-system group. The cisco.iosxr collection must also be installed as the module relies on its code. When running the module, ansible_connection needs to be set to network_cli and ansible_network_os to iosxr. These variables are usually defined in the inventory.
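For example, a minimal inventory snippet setting these variables could look like the following (the host name and address are made up):
[routers]
router1 ansible_host=203.0.113.10 ansible_user=vincent

[routers:vars]
ansible_connection=network_cli
ansible_network_os=iosxr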

Module definition Starting from the skeleton described in the previous article, we define the module:
module_args = dict(
    keys=dict(type='dict', elements='str', required=True),
)
module = AnsibleModule(
    argument_spec=module_args,
    supports_check_mode=True
)
result = dict(
    changed=False
)

Getting the installed keys The next step is to retrieve the keys currently installed. This can be done with the following command:
# show crypto key authentication rsa all
Key label: vincent
Type     : RSA public key authentication
Size     : 2048
Imported : 16:17:08 UTC Tue Aug 11 2020
Data     :
 30820122 300D0609 2A864886 F70D0101 01050003 82010F00 3082010A 02820101
 00D81E5B A73D82F3 77B1E4B5 949FB245 60FB9167 7CD03AB7 ADDE7AFE A0B83174
 A33EC0E6 1C887E02 2338367A 8A1DB0CE 0C3FBC51 15723AEB 07F301A4 B1A9961A
 2D00DBBD 2ABFC831 B0B25932 05B3BC30 B9514EA1 3DC22CBD DDCA6F02 026DBBB6
 EE3CFADA AFA86F52 CAE7620D 17C3582B 4422D24F D68698A5 52ED1E9E 8E41F062
 7DE81015 F33AD486 C14D0BB1 68C65259 F9FD8A37 8DE52ED0 7B36E005 8C58516B
 7EA6C29A EEE0833B 42714618 50B3FFAC 15DBE3EF 8DA5D337 68DAECB9 904DE520
 2D627CEA 67E6434F E974CF6D 952AB2AB F074FBA3 3FB9B9CC A0CD0ADC 6E0CDB2A
 6A1CFEBA E97AF5A9 1FE41F6C 92E1F522 673E1A5F 69C68E11 4A13C0F3 0FFC782D
 27020301 0001
[ ]
ansible_collections.cisco.iosxr.plugins.module_utils.network.iosxr.iosxr contains a run_commands() function we can use:
command = "show crypto key authentication rsa all"
out = run_commands(module, command)
out = out[0].replace(' \n', '\n')
A common library to parse a command output is textfsm: a Python module using a template-based state machine for parsing semi-formatted text.
template = r"""
Value Required Label (\w+)
Value Required,List Data ([A-F0-9 ]+)
Start
 ^Key label: ${Label}
 ^Data\s+: -> GetData
GetData
 ^ ${Data}
 ^$$ -> Record Start
""".lstrip()
re_table = textfsm.TextFSM(io.StringIO(template))
got = {data[0]: "".join(data[1]).replace(' ', '')
       for data in re_table.ParseText(out)}
got is a dictionary associating key labels, considered as usernames, with a hexadecimal representation of the public key currently installed. It looks like this:
>>> pprint(got)
{'alfred': '30820122300D0609[ ]6F0203010001',
 'cedric': '30820122300D0609[ ]710203010001',
 'vincent': '30820122300D0609[ ]270203010001'}

Comparing with the wanted keys Let's now build the wanted dictionary using the same structure. In module.params['keys'], we have a dictionary associating usernames to public SSH keys in the OpenSSH format:
>>> pprint(module.params['keys'])
{'cedric': 'ssh-rsa AAAAB3NzaC1yc2[ ]',
 'vincent': 'ssh-rsa AAAAB3NzaC1yc2[ ]'}
We need to convert these keys to the same hexadecimal representation used by Cisco above. The ssh-keygen command and some glue can do the conversion:2
$ ssh-keygen -f id_rsa.pub -e -mPKCS8 \
    | grep -v '^---' \
    | base64 -d \
    | hexdump -e '4/1 "%0.2X"'
30820122300D06092[ ]782D270203010001
Assuming we have a ssh2cisco() function doing that, we can build the wanted dictionary:
wanted = {k: ssh2cisco(v)
          for k, v in module.params['keys'].items()}

Applying changes Back to the skeleton described in the previous article, the last step is to apply the changes if there is a difference between got and wanted when not running with check mode. The part comparing got and wanted is taken verbatim from the skeleton module:
if got != wanted:
    result['changed'] = True
    result['diff'] = dict(
        before=yaml.safe_dump(got),
        after=yaml.safe_dump(wanted)
    )
if module.check_mode or not result['changed']:
    module.exit_json(**result)
Let's copy the new or changed keys and attach them to their respective users. For this purpose, we reuse the get_connection() and copy_file() functions from ansible_collections.cisco.iosxr.plugins.module_utils.network.iosxr.iosxr.
conn = get_connection(module)
for user in wanted:
    if user not in got or wanted[user] != got[user]:
        dst = f"/harddisk:/publickey_ user .raw"
        with tempfile.NamedTemporaryFile() as src:
            decoded = base64.b64decode(
                module.params['keys'][user].split()[1])
            src.write(decoded)
            src.flush()
            copy_file(module, src.name, dst)
    command = ("admin crypto key import authentication rsa "
               f"username  user   dst ")
    conn.send_command(command, prompt="yes/no", answer="yes")
Then, we remove obsolete keys:
for user in got:
    if user not in wanted:
        command = ("admin crypto key zeroize authentication rsa "
                   f"username  user ")
        conn.send_command(command, prompt="yes/no", answer="yes")

The complete code is available on GitHub. Compared to the iosxr_user module, this one displays a diff when running with --diff, correctly signals a change, is faster, 3 and deletes unwanted SSH keys. However, it is unable to create users and cannot configure passwords or multiple SSH keys.

  1. In our environment, the Ansible playbook pushes a full configuration, including the user definitions. Then, it synchronizes the SSH keys.
  2. Despite the argument provided to ssh-keygen, the format used by Cisco is not PKCS#8. This is the ASN.1 representation of a Subject Public Key Info structure, as defined in RFC 2459. Moreover, PKCS#8 is a format for a private key, not a public one.
  3. The main factors for being faster are:
    • not creating users, and
    • not reuploading existing SSH keys.

1 September 2020

Paul Wise: FLOSS Activities August 2020

Focus This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

Administration
  • Debian: restarted RAM eating service
  • Debian wiki: unblock IP addresses, approve accounts

Sponsors The cython-blis/preshed/thinc/theano bugs and smart-open/python-importlib-metadata/python-pyfakefs/python-zipp/python-threadpoolctl backports were sponsored by my employer. All other work was done on a volunteer basis.

22 August 2020

Jelmer Vernooij: Debian Janitor: > 60,000 Lintian Issues Automatically Fixed

The Debian Janitor is an automated system that commits fixes for (minor) issues in Debian packages that can be fixed by software. It gradually started proposing merges in early December. The first set of changes sent out ran lintian-brush on sid packages maintained in Git. This post is part of a series about the progress of the Janitor.

Scheduling Lintian Fixes To determine which packages to process, the Janitor looks at the import of lintian output across the archive that is available in UDD [1]. It will prioritize those packages with the most and more severe issues that it has fixers for. Once a package is selected, it will clone the packaging repository and run lintian-brush on it. Lintian-brush provides a framework for applying a set of fixers to a package. It will run each of a set of fixers in a pristine version of the repository, and handles most of the heavy lifting.
The Inner Workings of a Fixer Each fixer is just an executable which gets run in a clean checkout of the package, and can make changes there. Most of the fixers are written in Python or shell, but they can be in any language. The contract for fixers is pretty simple:
  • If the fixer exits with non-zero, the changes are reverted and fixer is considered to have failed
  • If it exits with zero and made changes, then it should write a summary of its changes to standard out
If a fixer is uncertain about the changes it has made, it should report so on standard output using a pseudo-header. By default, lintian-brush will discard any changes with uncertainty but if you are running it locally you can still apply them by specifying --uncertain. The summary message on standard out will be used for the commit message and (possibly) the changelog message, if the package doesn't use gbp dch.
Example Fixer Let's look at an example. The package priority extra is deprecated since Debian Policy 4.0.1 (released August 2017), see Policy 2.5 "Priorities". Instead, most packages should use the optional priority. Lintian will warn when a package uses the deprecated extra value for the Priority - the associated tag is priority-extra-is-replaced-by-priority-optional. Lintian-brush has a fixer script that can automatically replace extra with optional. On systems that have lintian-brush installed, the source for the fixer lives in /usr/share/lintian-brush/fixers/priority-extra-is-replaced-by-priority-optional.py, but here is a copy of it for reference:
#!/usr/bin/python3
from debmutate.control import ControlEditor
from lintian_brush.fixer import report_result, fixed_lintian_tag
with ControlEditor() as updater:
    for para in updater.paragraphs:
        if para.get("Priority") == "extra":
            para["Priority"] = "optional"
            fixed_lintian_tag(
                para, 'priority-extra-is-replaced-by-priority-optional')
report_result("Change priority extra to priority optional.")
This fixer is written in Python and uses the debmutate library to easily modify control files while preserving formatting, or to back out if it is not possible to preserve formatting. All the current fixers come with tests, e.g. for this particular fixer the tests can be found here: https://salsa.debian.org/jelmer/lintian-brush/-/tree/master/tests/priority-extra-is-replaced-by-priority-optional. For more details on writing new fixers, see the README for lintian-brush. For more details on debugging them, see the manual page.
Successes by fixer Here is a list of the fixers currently available, with the number of successful merges/pushes per fixer:
Lintian Tag Previously merged/pushed Ready but not yet merged/pushed
uses-debhelper-compat-file 4906 4161
upstream-metadata-file-is-missing 4281 3841
package-uses-old-debhelper-compat-version 4256 3617
upstream-metadata-missing-bug-tracking 2438 2995
out-of-date-standards-version 2062 2936
upstream-metadata-missing-repository 1936 2987
trailing-whitespace 1720 2295
insecure-copyright-format-uri 1791 1093
package-uses-deprecated-debhelper-compat-version 1391 1287
vcs-obsolete-in-debian-infrastructure 872 782
homepage-field-uses-insecure-uri 527 1111
vcs-field-not-canonical 850 655
debian-changelog-has-wrong-day-of-week 224 376
debian-watch-uses-insecure-uri 314 242
useless-autoreconf-build-depends 112 428
priority-extra-is-replaced-by-priority-optional 315 194
debian-rules-contains-unnecessary-get-orig-source-target 35 428
tab-in-license-text 125 320
debian-changelog-line-too-long 186 190
debian-rules-sets-dpkg-architecture-variable 69 166
debian-rules-uses-unnecessary-dh-argument 42 182
package-lacks-versioned-build-depends-on-debhelper 125 95
unversioned-copyright-format-uri 43 136
package-needs-versioned-debhelper-build-depends 127 50
binary-control-field-duplicates-source 34 134
renamed-tag 73 69
vcs-field-uses-insecure-uri 14 109
uses-deprecated-adttmp 13 91
debug-symbol-migration-possibly-complete 12 88
copyright-refers-to-symlink-license 51 48
debian-control-has-unusual-field-spacing 33 66
old-source-override-location 32 62
out-of-date-copyright-format 20 62
public-upstream-key-not-minimal 43 30
older-source-format 17 54
custom-compression-in-debian-source-options 12 57
copyright-refers-to-versionless-license-file 29 39
tab-in-licence-text 33 31
global-files-wildcard-not-first-paragraph-in-dep5-copyright 28 33
out-of-date-copyright-format-uri 9 50
field-name-typo-dep5-copyright 29 29
copyright-does-not-refer-to-common-license-file 13 42
debhelper-but-no-misc-depends 9 45
debian-watch-file-is-missing 11 41
debian-control-has-obsolete-dbg-package 8 40
possible-missing-colon-in-closes 31 13
unnecessary-testsuite-autopkgtest-field 32 9
missing-debian-source-format 7 33
debhelper-tools-from-autotools-dev-are-deprecated 9 29
vcs-field-mismatch 8 29
debian-changelog-file-contains-obsolete-user-emacs-setting 33 0
patch-file-present-but-not-mentioned-in-series 24 9
copyright-refers-to-versionless-license-file 22 9
debian-control-has-empty-field 25 6
missing-build-dependency-for-dh-addon 10 20
obsolete-field-in-dep5-copyright 15 13
xs-testsuite-field-in-debian-control 20 7
ancient-python-version-field 13 12
unnecessary-team-upload 19 5
misspelled-closes-bug 6 16
field-name-typo-in-dep5-copyright 1 20
transitional-package-not-oldlibs-optional 4 17
maintainer-script-without-set-e 9 11
dh-clean-k-is-deprecated 4 14
no-dh-sequencer 14 4
missing-vcs-browser-field 5 12
space-in-std-shortname-in-dep5-copyright 6 10
xc-package-type-in-debian-control 4 11
debian-rules-missing-recommended-target 4 10
desktop-entry-contains-encoding-key 1 13
build-depends-on-obsolete-package 4 9
license-file-listed-in-debian-copyright 1 12
missing-built-using-field-for-golang-package 9 4
unused-license-paragraph-in-dep5-copyright 4 7
missing-build-dependency-for-dh_command 6 4
comma-separated-files-in-dep5-copyright 3 6
systemd-service-file-refers-to-var-run 4 5
copyright-not-using-common-license-for-apache2 3 5
debian-tests-control-autodep8-is-obsolete 2 6
dh-quilt-addon-but-quilt-source-format 2 6
no-homepage-field 3 5
font-packge-not-multi-arch-foreign 1 6
homepage-in-binary-package 1 4
vcs-field-bitrotted 1 3
built-using-field-on-arch-all-package 2 1
copyright-should-refer-to-common-license-file-for-apache-2 1 2
debian-pyversions-is-obsolete 3 0
debian-watch-file-uses-deprecated-githubredir 1 1
executable-desktop-file 1 1
skip-systemd-native-flag-missing-pre-depends 1 1
vcs-field-uses-not-recommended-uri-format 1 1
init.d-script-needs-depends-on-lsb-base 1 0
maintainer-also-in-uploaders 1 0
public-upstream-keys-in-multiple-locations 1 0
wrong-debian-qa-group-name 1 0
Total 29656 32209

Footnotes
[1] temporarily unavailable due to Debian bug #960156, but the Janitor is relying on historical data

For more information about the Janitor's lintian-fixes efforts, see the landing page
